00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-main" build number 3468 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3079 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.009 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.010 The recommended git tool is: git 00:00:00.010 using credential 00000000-0000-0000-0000-000000000002 00:00:00.013 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.028 Fetching changes from the remote Git repository 00:00:00.029 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.043 Using shallow fetch with depth 1 00:00:00.043 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.043 > git --version # timeout=10 00:00:00.058 > git --version # 'git version 2.39.2' 00:00:00.058 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.060 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.060 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.150 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.160 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.170 Checking out Revision 71481c63295b6b9f0ecef6c6e69e033a6109160a (FETCH_HEAD) 00:00:02.170 > git config core.sparsecheckout # timeout=10 00:00:02.180 > git read-tree -mu HEAD # timeout=10 00:00:02.195 > git checkout -f 71481c63295b6b9f0ecef6c6e69e033a6109160a # timeout=5 00:00:02.214 Commit message: "jenkins/jjb-config: Disable bsc job until further notice" 00:00:02.215 > git rev-list --no-walk 71481c63295b6b9f0ecef6c6e69e033a6109160a # timeout=10 00:00:02.456 [Pipeline] Start of Pipeline 00:00:02.471 [Pipeline] library 00:00:02.473 Loading library shm_lib@master 00:00:02.473 Library shm_lib@master is cached. Copying from home. 00:00:02.488 [Pipeline] node 00:00:02.510 Running on WFP20 in /var/jenkins/workspace/short-fuzz-phy-autotest 00:00:02.512 [Pipeline] { 00:00:02.522 [Pipeline] catchError 00:00:02.523 [Pipeline] { 00:00:02.535 [Pipeline] wrap 00:00:02.543 [Pipeline] { 00:00:02.551 [Pipeline] stage 00:00:02.553 [Pipeline] { (Prologue) 00:00:02.729 [Pipeline] sh 00:00:03.017 + logger -p user.info -t JENKINS-CI 00:00:03.037 [Pipeline] echo 00:00:03.039 Node: WFP20 00:00:03.047 [Pipeline] sh 00:00:03.349 [Pipeline] setCustomBuildProperty 00:00:03.358 [Pipeline] echo 00:00:03.359 Cleanup processes 00:00:03.364 [Pipeline] sh 00:00:03.647 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:03.647 3354239 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:03.659 [Pipeline] sh 00:00:03.939 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:03.939 ++ grep -v 'sudo pgrep' 00:00:03.939 ++ awk '{print $1}' 00:00:03.939 + sudo kill -9 00:00:03.939 + true 00:00:03.951 [Pipeline] cleanWs 00:00:03.958 [WS-CLEANUP] Deleting project workspace... 00:00:03.958 [WS-CLEANUP] Deferred wipeout is used... 
00:00:03.964 [WS-CLEANUP] done 00:00:03.967 [Pipeline] setCustomBuildProperty 00:00:03.976 [Pipeline] sh 00:00:04.254 + sudo git config --global --replace-all safe.directory '*' 00:00:04.310 [Pipeline] nodesByLabel 00:00:04.311 Found a total of 1 nodes with the 'sorcerer' label 00:00:04.318 [Pipeline] httpRequest 00:00:04.322 HttpMethod: GET 00:00:04.323 URL: http://10.211.164.101/packages/jbp_71481c63295b6b9f0ecef6c6e69e033a6109160a.tar.gz 00:00:04.329 Sending request to url: http://10.211.164.101/packages/jbp_71481c63295b6b9f0ecef6c6e69e033a6109160a.tar.gz 00:00:04.331 Response Code: HTTP/1.1 200 OK 00:00:04.332 Success: Status code 200 is in the accepted range: 200,404 00:00:04.332 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/jbp_71481c63295b6b9f0ecef6c6e69e033a6109160a.tar.gz 00:00:05.642 [Pipeline] sh 00:00:05.920 + tar --no-same-owner -xf jbp_71481c63295b6b9f0ecef6c6e69e033a6109160a.tar.gz 00:00:05.938 [Pipeline] httpRequest 00:00:05.942 HttpMethod: GET 00:00:05.943 URL: http://10.211.164.101/packages/spdk_dafdb289f5521a85d804cfd0a1254835d3b4ef10.tar.gz 00:00:05.943 Sending request to url: http://10.211.164.101/packages/spdk_dafdb289f5521a85d804cfd0a1254835d3b4ef10.tar.gz 00:00:05.959 Response Code: HTTP/1.1 200 OK 00:00:05.960 Success: Status code 200 is in the accepted range: 200,404 00:00:05.960 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk_dafdb289f5521a85d804cfd0a1254835d3b4ef10.tar.gz 00:01:22.472 [Pipeline] sh 00:01:22.758 + tar --no-same-owner -xf spdk_dafdb289f5521a85d804cfd0a1254835d3b4ef10.tar.gz 00:01:25.345 [Pipeline] sh 00:01:25.630 + git -C spdk log --oneline -n5 00:01:25.630 dafdb289f raid: allow re-adding a base bdev with superblock 00:01:25.630 b694ff865 raid: add callback to raid_bdev_examine_sb() 00:01:25.630 30c08caa3 test/raid: always create pt bdevs in rebuild test 00:01:25.630 e2f90f3c7 test/raid: remove unnecessary recreating of base bdevs 00:01:25.630 bad11eeac raid: keep raid bdev in CONFIGURING state when last base bdev is removed 00:01:25.649 [Pipeline] withCredentials 00:01:25.660 > git --version # timeout=10 00:01:25.670 > git --version # 'git version 2.39.2' 00:01:25.688 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:25.690 [Pipeline] { 00:01:25.700 [Pipeline] retry 00:01:25.702 [Pipeline] { 00:01:25.721 [Pipeline] sh 00:01:26.005 + git ls-remote http://dpdk.org/git/dpdk main 00:01:26.019 [Pipeline] } 00:01:26.042 [Pipeline] // retry 00:01:26.047 [Pipeline] } 00:01:26.068 [Pipeline] // withCredentials 00:01:26.080 [Pipeline] httpRequest 00:01:26.085 HttpMethod: GET 00:01:26.085 URL: http://10.211.164.101/packages/dpdk_7e06c0de1952d3109a5b0c4779d7e7d8059c9d78.tar.gz 00:01:26.090 Sending request to url: http://10.211.164.101/packages/dpdk_7e06c0de1952d3109a5b0c4779d7e7d8059c9d78.tar.gz 00:01:26.092 Response Code: HTTP/1.1 200 OK 00:01:26.093 Success: Status code 200 is in the accepted range: 200,404 00:01:26.094 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk_7e06c0de1952d3109a5b0c4779d7e7d8059c9d78.tar.gz 00:01:33.815 [Pipeline] sh 00:01:34.097 + tar --no-same-owner -xf dpdk_7e06c0de1952d3109a5b0c4779d7e7d8059c9d78.tar.gz 00:01:35.488 [Pipeline] sh 00:01:35.772 + git -C dpdk log --oneline -n5 00:01:35.772 7e06c0de19 examples: move alignment attribute on types for MSVC 00:01:35.772 27595cd830 drivers: move alignment attribute on types for MSVC 00:01:35.772 0efea35a2b app: move alignment attribute on types for MSVC 00:01:35.772 e2e546ab5b version: 
24.07-rc0 00:01:35.772 a9778aad62 version: 24.03.0 00:01:35.783 [Pipeline] } 00:01:35.799 [Pipeline] // stage 00:01:35.807 [Pipeline] stage 00:01:35.810 [Pipeline] { (Prepare) 00:01:35.830 [Pipeline] writeFile 00:01:35.846 [Pipeline] sh 00:01:36.129 + logger -p user.info -t JENKINS-CI 00:01:36.142 [Pipeline] sh 00:01:36.424 + logger -p user.info -t JENKINS-CI 00:01:36.436 [Pipeline] sh 00:01:36.718 + cat autorun-spdk.conf 00:01:36.719 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:36.719 SPDK_RUN_UBSAN=1 00:01:36.719 SPDK_TEST_FUZZER=1 00:01:36.719 SPDK_TEST_FUZZER_SHORT=1 00:01:36.719 SPDK_TEST_NATIVE_DPDK=main 00:01:36.719 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build 00:01:36.726 RUN_NIGHTLY=1 00:01:36.731 [Pipeline] readFile 00:01:36.755 [Pipeline] withEnv 00:01:36.757 [Pipeline] { 00:01:36.772 [Pipeline] sh 00:01:37.057 + set -ex 00:01:37.057 + [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf ]] 00:01:37.057 + source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:01:37.057 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:37.057 ++ SPDK_RUN_UBSAN=1 00:01:37.057 ++ SPDK_TEST_FUZZER=1 00:01:37.057 ++ SPDK_TEST_FUZZER_SHORT=1 00:01:37.057 ++ SPDK_TEST_NATIVE_DPDK=main 00:01:37.057 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build 00:01:37.057 ++ RUN_NIGHTLY=1 00:01:37.057 + case $SPDK_TEST_NVMF_NICS in 00:01:37.057 + DRIVERS= 00:01:37.057 + [[ -n '' ]] 00:01:37.057 + exit 0 00:01:37.067 [Pipeline] } 00:01:37.085 [Pipeline] // withEnv 00:01:37.091 [Pipeline] } 00:01:37.109 [Pipeline] // stage 00:01:37.118 [Pipeline] catchError 00:01:37.120 [Pipeline] { 00:01:37.137 [Pipeline] timeout 00:01:37.137 Timeout set to expire in 30 min 00:01:37.139 [Pipeline] { 00:01:37.154 [Pipeline] stage 00:01:37.156 [Pipeline] { (Tests) 00:01:37.172 [Pipeline] sh 00:01:37.455 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/short-fuzz-phy-autotest 00:01:37.456 ++ readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest 00:01:37.456 + DIR_ROOT=/var/jenkins/workspace/short-fuzz-phy-autotest 00:01:37.456 + [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest ]] 00:01:37.456 + DIR_SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:01:37.456 + DIR_OUTPUT=/var/jenkins/workspace/short-fuzz-phy-autotest/output 00:01:37.456 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk ]] 00:01:37.456 + [[ ! 
-d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]] 00:01:37.456 + mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/output 00:01:37.456 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]] 00:01:37.456 + cd /var/jenkins/workspace/short-fuzz-phy-autotest 00:01:37.456 + source /etc/os-release 00:01:37.456 ++ NAME='Fedora Linux' 00:01:37.456 ++ VERSION='38 (Cloud Edition)' 00:01:37.456 ++ ID=fedora 00:01:37.456 ++ VERSION_ID=38 00:01:37.456 ++ VERSION_CODENAME= 00:01:37.456 ++ PLATFORM_ID=platform:f38 00:01:37.456 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:37.456 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:37.456 ++ LOGO=fedora-logo-icon 00:01:37.456 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:37.456 ++ HOME_URL=https://fedoraproject.org/ 00:01:37.456 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:37.456 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:37.456 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:37.456 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:37.456 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:37.456 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:37.456 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:37.456 ++ SUPPORT_END=2024-05-14 00:01:37.456 ++ VARIANT='Cloud Edition' 00:01:37.456 ++ VARIANT_ID=cloud 00:01:37.456 + uname -a 00:01:37.456 Linux spdk-wfp-20 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:37.456 + sudo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:01:40.750 Hugepages 00:01:40.750 node hugesize free / total 00:01:40.750 node0 1048576kB 0 / 0 00:01:40.750 node0 2048kB 0 / 0 00:01:40.750 node1 1048576kB 0 / 0 00:01:40.750 node1 2048kB 0 / 0 00:01:40.750 00:01:40.750 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:40.750 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:40.751 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:01:40.751 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:40.751 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:40.751 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:40.751 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:40.751 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:40.751 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:40.751 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:40.751 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:40.751 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:40.751 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:40.751 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:40.751 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:40.751 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:40.751 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:40.751 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:40.751 + rm -f /tmp/spdk-ld-path 00:01:40.751 + source autorun-spdk.conf 00:01:40.751 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:40.751 ++ SPDK_RUN_UBSAN=1 00:01:40.751 ++ SPDK_TEST_FUZZER=1 00:01:40.751 ++ SPDK_TEST_FUZZER_SHORT=1 00:01:40.751 ++ SPDK_TEST_NATIVE_DPDK=main 00:01:40.751 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build 00:01:40.751 ++ RUN_NIGHTLY=1 00:01:40.751 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:40.751 + [[ -n '' ]] 00:01:40.751 + sudo git config --global --add safe.directory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:01:40.751 + for M in /var/spdk/build-*-manifest.txt 00:01:40.751 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:40.751 + cp 
/var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:01:40.751 + for M in /var/spdk/build-*-manifest.txt 00:01:40.751 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:40.751 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:01:40.751 ++ uname 00:01:40.751 + [[ Linux == \L\i\n\u\x ]] 00:01:40.751 + sudo dmesg -T 00:01:40.751 + sudo dmesg --clear 00:01:40.751 + dmesg_pid=3355727 00:01:40.751 + [[ Fedora Linux == FreeBSD ]] 00:01:40.751 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:40.751 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:40.751 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:40.751 + [[ -x /usr/src/fio-static/fio ]] 00:01:40.751 + export FIO_BIN=/usr/src/fio-static/fio 00:01:40.751 + FIO_BIN=/usr/src/fio-static/fio 00:01:40.751 + sudo dmesg -Tw 00:01:40.751 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\s\h\o\r\t\-\f\u\z\z\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:40.751 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:40.751 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:40.751 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:40.751 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:40.751 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:40.751 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:40.751 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:40.751 + spdk/autorun.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:01:40.751 Test configuration: 00:01:40.751 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:40.751 SPDK_RUN_UBSAN=1 00:01:40.751 SPDK_TEST_FUZZER=1 00:01:40.751 SPDK_TEST_FUZZER_SHORT=1 00:01:40.751 SPDK_TEST_NATIVE_DPDK=main 00:01:40.751 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build 00:01:40.751 RUN_NIGHTLY=1 02:42:31 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:01:40.751 02:42:31 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:40.751 02:42:31 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:40.751 02:42:31 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:40.751 02:42:31 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:40.751 02:42:31 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:40.751 02:42:31 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:40.751 02:42:31 -- paths/export.sh@5 -- $ export PATH 00:01:40.751 02:42:31 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:40.751 02:42:31 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:01:40.751 02:42:31 -- common/autobuild_common.sh@437 -- $ date +%s 00:01:40.751 02:42:31 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715560951.XXXXXX 00:01:40.751 02:42:31 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715560951.KJkKXa 00:01:40.751 02:42:31 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:01:40.751 02:42:31 -- common/autobuild_common.sh@443 -- $ '[' -n main ']' 00:01:40.751 02:42:31 -- common/autobuild_common.sh@444 -- $ dirname /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build 00:01:41.011 02:42:31 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk' 00:01:41.011 02:42:31 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:41.011 02:42:31 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:41.011 02:42:31 -- common/autobuild_common.sh@453 -- $ get_config_params 00:01:41.011 02:42:31 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:01:41.011 02:42:31 -- common/autotest_common.sh@10 -- $ set +x 00:01:41.011 02:42:31 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build --with-vfio-user' 00:01:41.011 02:42:31 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:01:41.011 02:42:31 -- pm/common@17 -- $ local monitor 00:01:41.011 02:42:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:41.011 02:42:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:41.011 02:42:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:41.011 02:42:31 -- pm/common@21 -- $ date +%s 00:01:41.011 02:42:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:41.011 02:42:31 -- pm/common@21 -- $ date +%s 00:01:41.011 02:42:31 -- pm/common@21 -- $ date +%s 00:01:41.011 02:42:31 -- pm/common@25 -- $ sleep 1 00:01:41.011 02:42:31 -- pm/common@21 -- $ date +%s 00:01:41.011 02:42:31 -- pm/common@21 -- $ 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715560951 00:01:41.011 02:42:31 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715560951 00:01:41.011 02:42:31 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715560951 00:01:41.011 02:42:31 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715560951 00:01:41.012 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715560951_collect-cpu-temp.pm.log 00:01:41.012 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715560951_collect-vmstat.pm.log 00:01:41.012 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715560951_collect-cpu-load.pm.log 00:01:41.012 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715560951_collect-bmc-pm.bmc.pm.log 00:01:41.957 02:42:32 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:01:41.957 02:42:32 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:41.957 02:42:32 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:41.957 02:42:32 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:01:41.957 02:42:32 -- spdk/autobuild.sh@16 -- $ date -u 00:01:41.957 Mon May 13 12:42:32 AM UTC 2024 00:01:41.957 02:42:32 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:41.957 v24.05-pre-583-gdafdb289f 00:01:41.957 02:42:32 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:41.957 02:42:32 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:41.957 02:42:32 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:41.957 02:42:32 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:01:41.957 02:42:32 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:01:41.957 02:42:32 -- common/autotest_common.sh@10 -- $ set +x 00:01:41.957 ************************************ 00:01:41.957 START TEST ubsan 00:01:41.957 ************************************ 00:01:41.957 02:42:32 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan' 00:01:41.957 using ubsan 00:01:41.957 00:01:41.957 real 0m0.000s 00:01:41.957 user 0m0.000s 00:01:41.957 sys 0m0.000s 00:01:41.957 02:42:32 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:01:41.957 02:42:32 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:41.957 ************************************ 00:01:41.957 END TEST ubsan 00:01:41.957 ************************************ 00:01:41.957 02:42:32 -- spdk/autobuild.sh@27 -- $ '[' -n main ']' 00:01:41.957 02:42:32 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:41.957 02:42:32 -- common/autobuild_common.sh@429 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:41.957 02:42:32 -- common/autotest_common.sh@1097 -- $ '[' 2 -le 1 ']' 00:01:41.957 02:42:32 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:01:41.957 02:42:32 -- 
common/autotest_common.sh@10 -- $ set +x 00:01:41.957 ************************************ 00:01:41.957 START TEST build_native_dpdk 00:01:41.957 ************************************ 00:01:41.957 02:42:32 build_native_dpdk -- common/autotest_common.sh@1121 -- $ _build_native_dpdk 00:01:41.957 02:42:32 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:41.957 02:42:32 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:41.957 02:42:32 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:41.957 02:42:32 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:41.957 02:42:32 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:41.957 02:42:32 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:41.957 02:42:32 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:41.957 02:42:32 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:41.957 02:42:32 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:41.957 02:42:32 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:41.957 02:42:32 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:41.957 02:42:32 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:41.957 02:42:32 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:41.957 02:42:32 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:41.957 02:42:32 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build 00:01:42.216 02:42:32 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build 00:01:42.216 02:42:32 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk 00:01:42.216 02:42:32 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk ]] 00:01:42.216 02:42:32 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:01:42.216 02:42:32 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk log --oneline -n 5 00:01:42.216 7e06c0de19 examples: move alignment attribute on types for MSVC 00:01:42.216 27595cd830 drivers: move alignment attribute on types for MSVC 00:01:42.216 0efea35a2b app: move alignment attribute on types for MSVC 00:01:42.216 e2e546ab5b version: 24.07-rc0 00:01:42.216 a9778aad62 version: 24.03.0 00:01:42.216 02:42:32 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:42.216 02:42:32 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:42.216 02:42:32 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=24.07.0-rc0 00:01:42.216 02:42:32 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:42.216 02:42:32 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:42.216 02:42:32 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:42.216 02:42:32 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:42.216 02:42:32 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:42.216 02:42:32 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:42.216 02:42:32 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:42.216 02:42:32 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:42.216 02:42:32 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:42.216 02:42:32 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:42.216 02:42:32 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:42.216 02:42:32 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk 00:01:42.216 02:42:32 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:01:42.216 02:42:32 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:42.216 02:42:32 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 24.07.0-rc0 21.11.0 00:01:42.217 02:42:32 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 24.07.0-rc0 '<' 21.11.0 00:01:42.217 02:42:32 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:01:42.217 02:42:32 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:01:42.217 02:42:32 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:01:42.217 02:42:32 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:01:42.217 02:42:32 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:01:42.217 02:42:32 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:01:42.217 02:42:32 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:01:42.217 02:42:32 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=4 00:01:42.217 02:42:32 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:01:42.217 02:42:32 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:01:42.217 02:42:32 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" 
in 00:01:42.217 02:42:32 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:01:42.217 02:42:32 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:01:42.217 02:42:32 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:42.217 02:42:32 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 24 00:01:42.217 02:42:32 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24 00:01:42.217 02:42:32 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:42.217 02:42:32 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24 00:01:42.217 02:42:32 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=24 00:01:42.217 02:42:32 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:01:42.217 02:42:32 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:01:42.217 02:42:32 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:42.217 02:42:32 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:01:42.217 02:42:32 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:01:42.217 02:42:32 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:42.217 02:42:32 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:01:42.217 02:42:32 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:42.217 patching file config/rte_config.h 00:01:42.217 Hunk #1 succeeded at 70 (offset 11 lines). 00:01:42.217 02:42:32 build_native_dpdk -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:01:42.217 02:42:32 build_native_dpdk -- common/autobuild_common.sh@178 -- $ uname -s 00:01:42.217 02:42:32 build_native_dpdk -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:01:42.217 02:42:32 build_native_dpdk -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:42.217 02:42:32 build_native_dpdk -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:47.493 The Meson build system 00:01:47.493 Version: 1.3.1 00:01:47.493 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk 00:01:47.493 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build-tmp 00:01:47.493 Build type: native build 00:01:47.493 Program cat found: YES (/usr/bin/cat) 00:01:47.493 Project name: DPDK 00:01:47.493 Project version: 24.07.0-rc0 00:01:47.493 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:47.493 C linker for the host machine: gcc ld.bfd 2.39-16 00:01:47.493 Host machine cpu family: x86_64 00:01:47.493 Host machine cpu: x86_64 00:01:47.493 Message: ## Building in Developer Mode ## 00:01:47.493 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:47.493 Program check-symbols.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:47.493 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:47.493 Program python3 found: YES (/usr/bin/python3) 00:01:47.493 Program cat found: YES (/usr/bin/cat) 00:01:47.493 config/meson.build:120: WARNING: The "machine" option is deprecated. 
Please use "cpu_instruction_set" instead. 00:01:47.493 Compiler for C supports arguments -march=native: YES 00:01:47.493 Checking for size of "void *" : 8 00:01:47.493 Checking for size of "void *" : 8 (cached) 00:01:47.493 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:47.493 Library m found: YES 00:01:47.493 Library numa found: YES 00:01:47.493 Has header "numaif.h" : YES 00:01:47.493 Library fdt found: NO 00:01:47.493 Library execinfo found: NO 00:01:47.493 Has header "execinfo.h" : YES 00:01:47.493 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:47.493 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:47.493 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:47.493 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:47.493 Run-time dependency openssl found: YES 3.0.9 00:01:47.493 Run-time dependency libpcap found: YES 1.10.4 00:01:47.493 Has header "pcap.h" with dependency libpcap: YES 00:01:47.493 Compiler for C supports arguments -Wcast-qual: YES 00:01:47.493 Compiler for C supports arguments -Wdeprecated: YES 00:01:47.493 Compiler for C supports arguments -Wformat: YES 00:01:47.493 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:47.493 Compiler for C supports arguments -Wformat-security: NO 00:01:47.493 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:47.493 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:47.493 Compiler for C supports arguments -Wnested-externs: YES 00:01:47.493 Compiler for C supports arguments -Wold-style-definition: YES 00:01:47.493 Compiler for C supports arguments -Wpointer-arith: YES 00:01:47.493 Compiler for C supports arguments -Wsign-compare: YES 00:01:47.493 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:47.493 Compiler for C supports arguments -Wundef: YES 00:01:47.493 Compiler for C supports arguments -Wwrite-strings: YES 00:01:47.493 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:47.493 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:47.493 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:47.493 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:47.493 Program objdump found: YES (/usr/bin/objdump) 00:01:47.493 Compiler for C supports arguments -mavx512f: YES 00:01:47.493 Checking if "AVX512 checking" compiles: YES 00:01:47.493 Fetching value of define "__SSE4_2__" : 1 00:01:47.493 Fetching value of define "__AES__" : 1 00:01:47.493 Fetching value of define "__AVX__" : 1 00:01:47.493 Fetching value of define "__AVX2__" : 1 00:01:47.493 Fetching value of define "__AVX512BW__" : 1 00:01:47.493 Fetching value of define "__AVX512CD__" : 1 00:01:47.493 Fetching value of define "__AVX512DQ__" : 1 00:01:47.493 Fetching value of define "__AVX512F__" : 1 00:01:47.493 Fetching value of define "__AVX512VL__" : 1 00:01:47.493 Fetching value of define "__PCLMUL__" : 1 00:01:47.493 Fetching value of define "__RDRND__" : 1 00:01:47.493 Fetching value of define "__RDSEED__" : 1 00:01:47.493 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:47.493 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:47.493 Message: lib/log: Defining dependency "log" 00:01:47.493 Message: lib/kvargs: Defining dependency "kvargs" 00:01:47.493 Message: lib/argparse: Defining dependency "argparse" 00:01:47.493 Message: lib/telemetry: Defining dependency "telemetry" 00:01:47.493 Checking for function "getentropy" : NO 00:01:47.493 
Message: lib/eal: Defining dependency "eal" 00:01:47.493 Message: lib/ring: Defining dependency "ring" 00:01:47.493 Message: lib/rcu: Defining dependency "rcu" 00:01:47.493 Message: lib/mempool: Defining dependency "mempool" 00:01:47.493 Message: lib/mbuf: Defining dependency "mbuf" 00:01:47.493 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:47.493 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:47.493 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:47.493 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:47.494 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:47.494 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:47.494 Compiler for C supports arguments -mpclmul: YES 00:01:47.494 Compiler for C supports arguments -maes: YES 00:01:47.494 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:47.494 Compiler for C supports arguments -mavx512bw: YES 00:01:47.494 Compiler for C supports arguments -mavx512dq: YES 00:01:47.494 Compiler for C supports arguments -mavx512vl: YES 00:01:47.494 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:47.494 Compiler for C supports arguments -mavx2: YES 00:01:47.494 Compiler for C supports arguments -mavx: YES 00:01:47.494 Message: lib/net: Defining dependency "net" 00:01:47.494 Message: lib/meter: Defining dependency "meter" 00:01:47.494 Message: lib/ethdev: Defining dependency "ethdev" 00:01:47.494 Message: lib/pci: Defining dependency "pci" 00:01:47.494 Message: lib/cmdline: Defining dependency "cmdline" 00:01:47.494 Message: lib/metrics: Defining dependency "metrics" 00:01:47.494 Message: lib/hash: Defining dependency "hash" 00:01:47.494 Message: lib/timer: Defining dependency "timer" 00:01:47.494 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:47.494 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:47.494 Fetching value of define "__AVX512CD__" : 1 (cached) 00:01:47.494 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:47.494 Message: lib/acl: Defining dependency "acl" 00:01:47.494 Message: lib/bbdev: Defining dependency "bbdev" 00:01:47.494 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:47.494 Run-time dependency libelf found: YES 0.190 00:01:47.494 Message: lib/bpf: Defining dependency "bpf" 00:01:47.494 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:47.494 Message: lib/compressdev: Defining dependency "compressdev" 00:01:47.494 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:47.494 Message: lib/distributor: Defining dependency "distributor" 00:01:47.494 Message: lib/dmadev: Defining dependency "dmadev" 00:01:47.494 Message: lib/efd: Defining dependency "efd" 00:01:47.494 Message: lib/eventdev: Defining dependency "eventdev" 00:01:47.494 Message: lib/dispatcher: Defining dependency "dispatcher" 00:01:47.494 Message: lib/gpudev: Defining dependency "gpudev" 00:01:47.494 Message: lib/gro: Defining dependency "gro" 00:01:47.494 Message: lib/gso: Defining dependency "gso" 00:01:47.494 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:47.494 Message: lib/jobstats: Defining dependency "jobstats" 00:01:47.494 Message: lib/latencystats: Defining dependency "latencystats" 00:01:47.494 Message: lib/lpm: Defining dependency "lpm" 00:01:47.494 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:47.494 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:47.494 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:47.494 Compiler for C supports arguments -mavx512f -mavx512dq 
-mavx512ifma: YES 00:01:47.494 Message: lib/member: Defining dependency "member" 00:01:47.494 Message: lib/pcapng: Defining dependency "pcapng" 00:01:47.494 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:47.494 Message: lib/power: Defining dependency "power" 00:01:47.494 Message: lib/rawdev: Defining dependency "rawdev" 00:01:47.494 Message: lib/regexdev: Defining dependency "regexdev" 00:01:47.494 Message: lib/mldev: Defining dependency "mldev" 00:01:47.494 Message: lib/rib: Defining dependency "rib" 00:01:47.494 Message: lib/reorder: Defining dependency "reorder" 00:01:47.494 Message: lib/sched: Defining dependency "sched" 00:01:47.494 Message: lib/security: Defining dependency "security" 00:01:47.494 Message: lib/stack: Defining dependency "stack" 00:01:47.494 Has header "linux/userfaultfd.h" : YES 00:01:47.494 Has header "linux/vduse.h" : YES 00:01:47.494 Message: lib/vhost: Defining dependency "vhost" 00:01:47.494 Message: lib/ipsec: Defining dependency "ipsec" 00:01:47.494 Message: lib/pdcp: Defining dependency "pdcp" 00:01:47.494 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:47.494 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:47.494 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:47.494 Message: lib/fib: Defining dependency "fib" 00:01:47.494 Message: lib/port: Defining dependency "port" 00:01:47.494 Message: lib/pdump: Defining dependency "pdump" 00:01:47.494 Message: lib/table: Defining dependency "table" 00:01:47.494 Message: lib/pipeline: Defining dependency "pipeline" 00:01:47.494 Message: lib/graph: Defining dependency "graph" 00:01:47.494 Message: lib/node: Defining dependency "node" 00:01:47.494 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:47.494 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:47.494 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:48.064 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:48.064 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:48.064 Compiler for C supports arguments -Wno-unused-value: YES 00:01:48.064 Compiler for C supports arguments -Wno-format: YES 00:01:48.064 Compiler for C supports arguments -Wno-format-security: YES 00:01:48.064 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:48.064 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:48.064 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:48.064 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:48.064 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:48.064 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:48.064 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:48.064 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:48.064 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:48.064 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:48.064 Has header "sys/epoll.h" : YES 00:01:48.064 Program doxygen found: YES (/usr/bin/doxygen) 00:01:48.064 Configuring doxy-api-html.conf using configuration 00:01:48.064 Configuring doxy-api-man.conf using configuration 00:01:48.064 Program mandb found: YES (/usr/bin/mandb) 00:01:48.064 Program sphinx-build found: NO 00:01:48.064 Configuring rte_build_config.h using configuration 00:01:48.064 Message: 00:01:48.064 ================= 00:01:48.064 Applications Enabled 00:01:48.064 ================= 00:01:48.064 00:01:48.064 apps: 00:01:48.064 dumpcap, graph, 
pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:01:48.064 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:01:48.064 test-pmd, test-regex, test-sad, test-security-perf, 00:01:48.064 00:01:48.064 Message: 00:01:48.064 ================= 00:01:48.064 Libraries Enabled 00:01:48.064 ================= 00:01:48.064 00:01:48.064 libs: 00:01:48.064 log, kvargs, argparse, telemetry, eal, ring, rcu, mempool, 00:01:48.064 mbuf, net, meter, ethdev, pci, cmdline, metrics, hash, 00:01:48.064 timer, acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, 00:01:48.064 distributor, dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, 00:01:48.064 ip_frag, jobstats, latencystats, lpm, member, pcapng, power, rawdev, 00:01:48.064 regexdev, mldev, rib, reorder, sched, security, stack, vhost, 00:01:48.064 ipsec, pdcp, fib, port, pdump, table, pipeline, graph, 00:01:48.064 node, 00:01:48.064 00:01:48.064 Message: 00:01:48.064 =============== 00:01:48.064 Drivers Enabled 00:01:48.064 =============== 00:01:48.064 00:01:48.064 common: 00:01:48.064 00:01:48.064 bus: 00:01:48.064 pci, vdev, 00:01:48.064 mempool: 00:01:48.064 ring, 00:01:48.064 dma: 00:01:48.064 00:01:48.064 net: 00:01:48.064 i40e, 00:01:48.064 raw: 00:01:48.064 00:01:48.064 crypto: 00:01:48.064 00:01:48.064 compress: 00:01:48.064 00:01:48.064 regex: 00:01:48.064 00:01:48.064 ml: 00:01:48.064 00:01:48.064 vdpa: 00:01:48.064 00:01:48.064 event: 00:01:48.064 00:01:48.064 baseband: 00:01:48.064 00:01:48.064 gpu: 00:01:48.064 00:01:48.064 00:01:48.064 Message: 00:01:48.064 ================= 00:01:48.064 Content Skipped 00:01:48.064 ================= 00:01:48.064 00:01:48.064 apps: 00:01:48.064 00:01:48.064 libs: 00:01:48.064 00:01:48.064 drivers: 00:01:48.064 common/cpt: not in enabled drivers build config 00:01:48.064 common/dpaax: not in enabled drivers build config 00:01:48.064 common/iavf: not in enabled drivers build config 00:01:48.064 common/idpf: not in enabled drivers build config 00:01:48.064 common/ionic: not in enabled drivers build config 00:01:48.064 common/mvep: not in enabled drivers build config 00:01:48.064 common/octeontx: not in enabled drivers build config 00:01:48.064 bus/auxiliary: not in enabled drivers build config 00:01:48.064 bus/cdx: not in enabled drivers build config 00:01:48.064 bus/dpaa: not in enabled drivers build config 00:01:48.064 bus/fslmc: not in enabled drivers build config 00:01:48.064 bus/ifpga: not in enabled drivers build config 00:01:48.064 bus/platform: not in enabled drivers build config 00:01:48.064 bus/uacce: not in enabled drivers build config 00:01:48.064 bus/vmbus: not in enabled drivers build config 00:01:48.064 common/cnxk: not in enabled drivers build config 00:01:48.064 common/mlx5: not in enabled drivers build config 00:01:48.064 common/nfp: not in enabled drivers build config 00:01:48.064 common/nitrox: not in enabled drivers build config 00:01:48.064 common/qat: not in enabled drivers build config 00:01:48.064 common/sfc_efx: not in enabled drivers build config 00:01:48.064 mempool/bucket: not in enabled drivers build config 00:01:48.064 mempool/cnxk: not in enabled drivers build config 00:01:48.064 mempool/dpaa: not in enabled drivers build config 00:01:48.064 mempool/dpaa2: not in enabled drivers build config 00:01:48.064 mempool/octeontx: not in enabled drivers build config 00:01:48.064 mempool/stack: not in enabled drivers build config 00:01:48.064 dma/cnxk: not in enabled drivers build 
config 00:01:48.064 dma/dpaa: not in enabled drivers build config 00:01:48.064 dma/dpaa2: not in enabled drivers build config 00:01:48.064 dma/hisilicon: not in enabled drivers build config 00:01:48.064 dma/idxd: not in enabled drivers build config 00:01:48.064 dma/ioat: not in enabled drivers build config 00:01:48.064 dma/skeleton: not in enabled drivers build config 00:01:48.064 net/af_packet: not in enabled drivers build config 00:01:48.064 net/af_xdp: not in enabled drivers build config 00:01:48.064 net/ark: not in enabled drivers build config 00:01:48.064 net/atlantic: not in enabled drivers build config 00:01:48.064 net/avp: not in enabled drivers build config 00:01:48.064 net/axgbe: not in enabled drivers build config 00:01:48.064 net/bnx2x: not in enabled drivers build config 00:01:48.064 net/bnxt: not in enabled drivers build config 00:01:48.064 net/bonding: not in enabled drivers build config 00:01:48.064 net/cnxk: not in enabled drivers build config 00:01:48.064 net/cpfl: not in enabled drivers build config 00:01:48.064 net/cxgbe: not in enabled drivers build config 00:01:48.064 net/dpaa: not in enabled drivers build config 00:01:48.064 net/dpaa2: not in enabled drivers build config 00:01:48.064 net/e1000: not in enabled drivers build config 00:01:48.064 net/ena: not in enabled drivers build config 00:01:48.064 net/enetc: not in enabled drivers build config 00:01:48.064 net/enetfec: not in enabled drivers build config 00:01:48.064 net/enic: not in enabled drivers build config 00:01:48.064 net/failsafe: not in enabled drivers build config 00:01:48.064 net/fm10k: not in enabled drivers build config 00:01:48.064 net/gve: not in enabled drivers build config 00:01:48.064 net/hinic: not in enabled drivers build config 00:01:48.064 net/hns3: not in enabled drivers build config 00:01:48.064 net/iavf: not in enabled drivers build config 00:01:48.064 net/ice: not in enabled drivers build config 00:01:48.064 net/idpf: not in enabled drivers build config 00:01:48.064 net/igc: not in enabled drivers build config 00:01:48.064 net/ionic: not in enabled drivers build config 00:01:48.064 net/ipn3ke: not in enabled drivers build config 00:01:48.064 net/ixgbe: not in enabled drivers build config 00:01:48.064 net/mana: not in enabled drivers build config 00:01:48.064 net/memif: not in enabled drivers build config 00:01:48.064 net/mlx4: not in enabled drivers build config 00:01:48.064 net/mlx5: not in enabled drivers build config 00:01:48.064 net/mvneta: not in enabled drivers build config 00:01:48.064 net/mvpp2: not in enabled drivers build config 00:01:48.064 net/netvsc: not in enabled drivers build config 00:01:48.064 net/nfb: not in enabled drivers build config 00:01:48.064 net/nfp: not in enabled drivers build config 00:01:48.064 net/ngbe: not in enabled drivers build config 00:01:48.064 net/null: not in enabled drivers build config 00:01:48.064 net/octeontx: not in enabled drivers build config 00:01:48.064 net/octeon_ep: not in enabled drivers build config 00:01:48.064 net/pcap: not in enabled drivers build config 00:01:48.064 net/pfe: not in enabled drivers build config 00:01:48.064 net/qede: not in enabled drivers build config 00:01:48.064 net/ring: not in enabled drivers build config 00:01:48.064 net/sfc: not in enabled drivers build config 00:01:48.064 net/softnic: not in enabled drivers build config 00:01:48.064 net/tap: not in enabled drivers build config 00:01:48.064 net/thunderx: not in enabled drivers build config 00:01:48.064 net/txgbe: not in enabled drivers build config 
00:01:48.064 net/vdev_netvsc: not in enabled drivers build config 00:01:48.064 net/vhost: not in enabled drivers build config 00:01:48.064 net/virtio: not in enabled drivers build config 00:01:48.064 net/vmxnet3: not in enabled drivers build config 00:01:48.064 raw/cnxk_bphy: not in enabled drivers build config 00:01:48.064 raw/cnxk_gpio: not in enabled drivers build config 00:01:48.064 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:48.064 raw/ifpga: not in enabled drivers build config 00:01:48.064 raw/ntb: not in enabled drivers build config 00:01:48.064 raw/skeleton: not in enabled drivers build config 00:01:48.064 crypto/armv8: not in enabled drivers build config 00:01:48.064 crypto/bcmfs: not in enabled drivers build config 00:01:48.064 crypto/caam_jr: not in enabled drivers build config 00:01:48.064 crypto/ccp: not in enabled drivers build config 00:01:48.064 crypto/cnxk: not in enabled drivers build config 00:01:48.064 crypto/dpaa_sec: not in enabled drivers build config 00:01:48.064 crypto/dpaa2_sec: not in enabled drivers build config 00:01:48.064 crypto/ipsec_mb: not in enabled drivers build config 00:01:48.064 crypto/mlx5: not in enabled drivers build config 00:01:48.065 crypto/mvsam: not in enabled drivers build config 00:01:48.065 crypto/nitrox: not in enabled drivers build config 00:01:48.065 crypto/null: not in enabled drivers build config 00:01:48.065 crypto/octeontx: not in enabled drivers build config 00:01:48.065 crypto/openssl: not in enabled drivers build config 00:01:48.065 crypto/scheduler: not in enabled drivers build config 00:01:48.065 crypto/uadk: not in enabled drivers build config 00:01:48.065 crypto/virtio: not in enabled drivers build config 00:01:48.065 compress/isal: not in enabled drivers build config 00:01:48.065 compress/mlx5: not in enabled drivers build config 00:01:48.065 compress/nitrox: not in enabled drivers build config 00:01:48.065 compress/octeontx: not in enabled drivers build config 00:01:48.065 compress/zlib: not in enabled drivers build config 00:01:48.065 regex/mlx5: not in enabled drivers build config 00:01:48.065 regex/cn9k: not in enabled drivers build config 00:01:48.065 ml/cnxk: not in enabled drivers build config 00:01:48.065 vdpa/ifc: not in enabled drivers build config 00:01:48.065 vdpa/mlx5: not in enabled drivers build config 00:01:48.065 vdpa/nfp: not in enabled drivers build config 00:01:48.065 vdpa/sfc: not in enabled drivers build config 00:01:48.065 event/cnxk: not in enabled drivers build config 00:01:48.065 event/dlb2: not in enabled drivers build config 00:01:48.065 event/dpaa: not in enabled drivers build config 00:01:48.065 event/dpaa2: not in enabled drivers build config 00:01:48.065 event/dsw: not in enabled drivers build config 00:01:48.065 event/opdl: not in enabled drivers build config 00:01:48.065 event/skeleton: not in enabled drivers build config 00:01:48.065 event/sw: not in enabled drivers build config 00:01:48.065 event/octeontx: not in enabled drivers build config 00:01:48.065 baseband/acc: not in enabled drivers build config 00:01:48.065 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:48.065 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:48.065 baseband/la12xx: not in enabled drivers build config 00:01:48.065 baseband/null: not in enabled drivers build config 00:01:48.065 baseband/turbo_sw: not in enabled drivers build config 00:01:48.065 gpu/cuda: not in enabled drivers build config 00:01:48.065 00:01:48.065 00:01:48.065 Build targets in project: 221 
00:01:48.065 00:01:48.065 DPDK 24.07.0-rc0 00:01:48.065 00:01:48.065 User defined options 00:01:48.065 libdir : lib 00:01:48.065 prefix : /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build 00:01:48.065 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:48.065 c_link_args : 00:01:48.065 enable_docs : false 00:01:48.065 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:48.065 enable_kmods : false 00:01:48.065 machine : native 00:01:48.065 tests : false 00:01:48.065 00:01:48.065 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:48.065 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:01:48.065 02:42:38 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build-tmp -j112 00:01:48.065 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build-tmp' 00:01:48.334 [1/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:48.334 [2/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:48.334 [3/719] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:48.334 [4/719] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:48.334 [5/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:48.334 [6/719] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:48.334 [7/719] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:48.334 [8/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:48.334 [9/719] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:48.598 [10/719] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:48.598 [11/719] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:48.598 [12/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:48.598 [13/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:48.598 [14/719] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:48.598 [15/719] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:48.598 [16/719] Linking static target lib/librte_kvargs.a 00:01:48.598 [17/719] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:48.598 [18/719] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:48.598 [19/719] Linking static target lib/librte_pci.a 00:01:48.598 [20/719] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:48.598 [21/719] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:48.598 [22/719] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:48.598 [23/719] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:48.598 [24/719] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:48.598 [25/719] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:48.598 [26/719] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:48.598 [27/719] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:48.598 [28/719] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:48.598 [29/719] Linking static target lib/librte_log.a 00:01:48.598 [30/719] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:48.860 [31/719] Compiling C object lib/librte_argparse.a.p/argparse_rte_argparse.c.o 00:01:48.860 [32/719] Linking static target lib/librte_argparse.a 00:01:48.860 [33/719] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:48.860 [34/719] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.860 [35/719] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:48.860 [36/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:48.860 [37/719] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:48.860 [38/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:48.860 [39/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:48.860 [40/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:48.860 [41/719] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.860 [42/719] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:48.860 [43/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:48.860 [44/719] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:48.860 [45/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:48.860 [46/719] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:48.860 [47/719] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:49.123 [48/719] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:49.123 [49/719] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:49.123 [50/719] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:49.123 [51/719] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:49.124 [52/719] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:49.124 [53/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:49.124 [54/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:49.124 [55/719] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:49.124 [56/719] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:49.124 [57/719] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:49.124 [58/719] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:49.124 [59/719] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:49.124 [60/719] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:49.124 [61/719] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:49.124 [62/719] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:49.124 [63/719] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:49.124 [64/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:49.124 [65/719] Linking static target lib/librte_meter.a 00:01:49.124 [66/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:49.124 [67/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:49.124 [68/719] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:49.124 [69/719] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 
00:01:49.124 [70/719] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:49.124 [71/719] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:49.124 [72/719] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:49.124 [73/719] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:49.124 [74/719] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:49.124 [75/719] Linking static target lib/librte_cmdline.a 00:01:49.124 [76/719] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:49.124 [77/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:49.124 [78/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:49.124 [79/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:49.124 [80/719] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:49.124 [81/719] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:49.124 [82/719] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:49.124 [83/719] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:49.124 [84/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:49.124 [85/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:49.124 [86/719] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:49.124 [87/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:49.124 [88/719] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:49.124 [89/719] Generating lib/argparse.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.124 [90/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:49.124 [91/719] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:49.124 [92/719] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:49.124 [93/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:49.124 [94/719] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:49.124 [95/719] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:49.124 [96/719] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:49.124 [97/719] Linking static target lib/librte_ring.a 00:01:49.124 [98/719] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:49.124 [99/719] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:49.124 [100/719] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:49.124 [101/719] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:49.124 [102/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:49.124 [103/719] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:49.124 [104/719] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:49.124 [105/719] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:49.124 [106/719] Linking static target lib/librte_metrics.a 00:01:49.124 [107/719] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:49.124 [108/719] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:49.382 [109/719] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:49.382 [110/719] Linking static target lib/librte_net.a 
00:01:49.382 [111/719] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:49.382 [112/719] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:49.382 [113/719] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:49.382 [114/719] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:49.382 [115/719] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:49.382 [116/719] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:49.382 [117/719] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:49.382 [118/719] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:49.382 [119/719] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:49.382 [120/719] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.382 [121/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:49.382 [122/719] Linking static target lib/librte_cfgfile.a 00:01:49.382 [123/719] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:49.382 [124/719] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:49.382 [125/719] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:49.382 [126/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:49.382 [127/719] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:49.382 [128/719] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:49.382 [129/719] Linking target lib/librte_log.so.24.2 00:01:49.382 [130/719] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.382 [131/719] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:49.382 [132/719] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:49.643 [133/719] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:49.643 [134/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:49.643 [135/719] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:49.643 [136/719] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:49.643 [137/719] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:49.643 [138/719] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:49.643 [139/719] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:49.643 [140/719] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:49.643 [141/719] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.643 [142/719] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:49.643 [143/719] Linking static target lib/librte_bitratestats.a 00:01:49.643 [144/719] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:49.643 [145/719] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.643 [146/719] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:49.643 [147/719] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:49.643 [148/719] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:49.643 [149/719] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:49.643 [150/719] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:49.643 [151/719] 
Generating symbol file lib/librte_log.so.24.2.p/librte_log.so.24.2.symbols 00:01:49.643 [152/719] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:49.643 [153/719] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:49.643 [154/719] Linking static target lib/librte_mempool.a 00:01:49.643 [155/719] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:49.643 [156/719] Linking static target lib/librte_timer.a 00:01:49.643 [157/719] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:49.643 [158/719] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:49.643 [159/719] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:49.643 [160/719] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:49.643 [161/719] Linking target lib/librte_kvargs.so.24.2 00:01:49.906 [162/719] Linking target lib/librte_argparse.so.24.2 00:01:49.906 [163/719] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:49.906 [164/719] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:49.906 [165/719] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:49.906 [166/719] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:49.906 [167/719] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:49.906 [168/719] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:49.907 [169/719] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:49.907 [170/719] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:49.907 [171/719] Linking static target lib/librte_jobstats.a 00:01:49.907 [172/719] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:49.907 [173/719] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:49.907 [174/719] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.907 [175/719] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:49.907 [176/719] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:49.907 [177/719] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:49.907 [178/719] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:49.907 [179/719] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:49.907 [180/719] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:49.907 [181/719] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:49.907 [182/719] Linking static target lib/librte_compressdev.a 00:01:49.907 [183/719] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:49.907 [184/719] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:49.907 [185/719] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:49.907 [186/719] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:49.907 [187/719] Linking static target lib/librte_bbdev.a 00:01:49.907 [188/719] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.907 [189/719] Generating symbol file lib/librte_kvargs.so.24.2.p/librte_kvargs.so.24.2.symbols 00:01:49.907 [190/719] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:49.907 [191/719] Compiling C object lib/librte_port.a.p/port_port_log.c.o 00:01:49.907 [192/719] Compiling C object 
lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:01:49.907 [193/719] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.907 [194/719] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:49.907 [195/719] Linking static target lib/member/libsketch_avx512_tmp.a 00:01:49.907 [196/719] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:01:49.907 [197/719] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:49.907 [198/719] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:50.167 [199/719] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:01:50.167 [200/719] Linking static target lib/librte_latencystats.a 00:01:50.167 [201/719] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:50.167 [202/719] Linking static target lib/librte_dispatcher.a 00:01:50.167 [203/719] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:01:50.167 [204/719] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:50.167 [205/719] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:50.167 [206/719] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:50.167 [207/719] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:50.167 [208/719] Linking static target lib/librte_telemetry.a 00:01:50.167 [209/719] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:50.167 [210/719] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:50.167 [211/719] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:50.167 [212/719] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:50.167 [213/719] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:50.167 [214/719] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:50.167 [215/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:50.167 [216/719] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:01:50.167 [217/719] Linking static target lib/librte_rcu.a 00:01:50.167 [218/719] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:50.167 [219/719] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:50.168 [220/719] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:50.168 [221/719] Linking static target lib/librte_gpudev.a 00:01:50.168 [222/719] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:50.168 [223/719] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:50.168 [224/719] Linking static target lib/librte_gro.a 00:01:50.168 [225/719] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:50.168 [226/719] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:50.168 [227/719] Linking static target lib/librte_eal.a 00:01:50.168 [228/719] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:50.168 [229/719] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:50.168 [230/719] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:50.168 [231/719] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:50.168 [232/719] Linking static target lib/librte_stack.a 00:01:50.168 [233/719] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:50.168 [234/719] Compiling C 
object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:50.168 [235/719] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:50.168 [236/719] Linking static target lib/librte_dmadev.a 00:01:50.168 [237/719] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:50.168 [238/719] Linking static target lib/librte_gso.a 00:01:50.168 [239/719] Linking static target lib/librte_distributor.a 00:01:50.168 [240/719] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.168 [241/719] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:50.168 [242/719] Linking static target lib/librte_regexdev.a 00:01:50.432 [243/719] Compiling C object lib/librte_table.a.p/table_table_log.c.o 00:01:50.432 [244/719] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:01:50.432 [245/719] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:50.432 [246/719] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:50.432 [247/719] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:50.432 [248/719] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.432 [249/719] Linking static target lib/librte_ip_frag.a 00:01:50.432 [250/719] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:50.432 [251/719] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:50.432 [252/719] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:50.432 [253/719] Linking static target lib/librte_mbuf.a 00:01:50.432 [254/719] Linking static target lib/librte_pcapng.a 00:01:50.432 [255/719] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:50.432 [256/719] Linking static target lib/librte_power.a 00:01:50.432 [257/719] Linking static target lib/librte_rawdev.a 00:01:50.432 [258/719] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.432 [259/719] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:50.432 [260/719] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:50.432 [261/719] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:50.432 [262/719] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:01:50.432 [263/719] Linking static target lib/librte_mldev.a 00:01:50.432 [264/719] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:50.432 [265/719] Linking static target lib/librte_reorder.a 00:01:50.432 [266/719] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:01:50.432 [267/719] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:50.432 [268/719] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:01:50.432 [269/719] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:50.432 [270/719] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:50.432 [271/719] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:50.432 [272/719] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.432 [273/719] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:01:50.432 [274/719] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.432 [275/719] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:01:50.432 [276/719] Generating lib/gso.sym_chk with a custom command 
(wrapped by meson to capture output) 00:01:50.695 [277/719] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:50.695 [278/719] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.695 [279/719] Linking static target lib/librte_security.a 00:01:50.695 [280/719] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.695 [281/719] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:50.695 [282/719] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:50.695 [283/719] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:50.695 [284/719] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:50.695 [285/719] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:01:50.695 [286/719] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:01:50.695 [287/719] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:50.695 [288/719] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:01:50.695 [289/719] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:50.695 [290/719] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.695 [291/719] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.695 [292/719] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:50.695 [293/719] Linking static target lib/librte_bpf.a 00:01:50.695 [294/719] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:50.695 [295/719] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:50.695 [296/719] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:50.695 [297/719] Linking static target lib/librte_lpm.a 00:01:50.695 [298/719] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.695 [299/719] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.959 [300/719] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:50.959 [301/719] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:50.959 [302/719] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.959 [303/719] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.959 [304/719] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:50.959 [305/719] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:01:50.959 [306/719] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:50.959 [307/719] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:50.959 [308/719] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:50.959 [309/719] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:50.959 [310/719] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.959 [311/719] Linking static target lib/librte_rib.a 00:01:50.959 [312/719] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:50.959 [313/719] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:50.959 [314/719] Linking target lib/librte_telemetry.so.24.2 00:01:50.959 [315/719] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:50.959 
[316/719] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:50.959 [317/719] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:50.959 [318/719] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:50.959 [319/719] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:50.959 [320/719] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.959 [321/719] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.959 [322/719] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:50.959 [323/719] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:50.959 [324/719] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.959 [325/719] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:50.959 [326/719] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:51.219 [327/719] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:51.219 [328/719] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:51.219 [329/719] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:51.219 [330/719] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:51.219 [331/719] Linking static target lib/librte_efd.a 00:01:51.219 [332/719] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:51.219 [333/719] Generating symbol file lib/librte_telemetry.so.24.2.p/librte_telemetry.so.24.2.symbols 00:01:51.219 [334/719] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:51.219 [335/719] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:51.219 [336/719] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:51.219 [337/719] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:51.219 [338/719] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:51.219 [339/719] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:51.219 [340/719] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.219 [341/719] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:01:51.219 [342/719] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:51.219 [343/719] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.219 [344/719] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.219 [345/719] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:51.219 [346/719] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.219 [347/719] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:51.219 [348/719] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:51.219 [349/719] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:51.219 [350/719] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:51.219 [351/719] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:51.219 [352/719] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:51.219 [353/719] Generating app/graph/commands_hdr with a custom command (wrapped by meson to capture output) 00:01:51.219 
[354/719] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:51.483 [355/719] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:01:51.483 [356/719] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:51.483 [357/719] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:51.483 [358/719] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.483 [359/719] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:51.484 [360/719] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:51.484 [361/719] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:51.484 [362/719] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:51.484 [363/719] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:51.484 [364/719] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:01:51.484 [365/719] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:51.484 [366/719] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.484 [367/719] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.484 [368/719] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:51.484 [369/719] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:51.484 [370/719] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:51.484 [371/719] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:51.484 [372/719] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.484 [373/719] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:51.484 [374/719] Linking static target lib/librte_fib.a 00:01:51.484 [375/719] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:51.484 [376/719] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:51.748 [377/719] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:51.748 [378/719] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:51.748 [379/719] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:51.748 [380/719] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:01:51.748 [381/719] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.748 [382/719] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.748 [383/719] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:51.748 [384/719] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:51.748 [385/719] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:01:51.748 [386/719] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:51.748 [387/719] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:51.748 [388/719] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:51.748 [389/719] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:51.748 [390/719] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:51.748 [391/719] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:51.748 [392/719] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:51.748 [393/719] Compiling C object 
lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:51.748 [394/719] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:01:51.748 [395/719] Linking static target lib/librte_pdump.a 00:01:51.748 [396/719] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:51.748 [397/719] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:51.748 [398/719] Linking static target lib/librte_graph.a 00:01:51.748 [399/719] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:52.011 [400/719] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:01:52.011 [401/719] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:52.011 [402/719] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:52.011 [403/719] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:52.011 [404/719] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:01:52.011 [405/719] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:52.011 [406/719] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:52.011 [407/719] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:52.011 [408/719] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:52.011 [409/719] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:52.011 [410/719] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:52.011 [411/719] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:01:52.011 [412/719] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:52.011 [413/719] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:01:52.011 [414/719] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:01:52.011 [415/719] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:01:52.011 [416/719] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:52.011 [417/719] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:52.011 [418/719] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:01:52.011 [419/719] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:52.011 [420/719] Compiling C object drivers/librte_bus_vdev.so.24.2.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:52.011 [421/719] Linking static target drivers/librte_bus_vdev.a 00:01:52.011 [422/719] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:52.273 [423/719] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:01:52.273 [424/719] Linking static target lib/librte_table.a 00:01:52.273 [425/719] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.273 [426/719] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:01:52.273 [427/719] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:01:52.273 [428/719] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:01:52.273 [429/719] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:01:52.273 [430/719] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:52.273 [431/719] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:52.273 [432/719] Linking static target lib/librte_sched.a 00:01:52.273 [433/719] Compiling C object 
drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:52.273 [434/719] Compiling C object app/dpdk-graph.p/graph_l2fwd.c.o 00:01:52.273 [435/719] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:01:52.273 [436/719] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:01:52.273 [437/719] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:01:52.273 [438/719] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:52.273 [439/719] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.273 [440/719] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:52.273 [441/719] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:52.273 [442/719] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:52.273 [443/719] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:52.273 [444/719] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:01:52.273 [445/719] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:01:52.273 [446/719] Linking static target lib/librte_cryptodev.a 00:01:52.273 [447/719] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:01:52.273 [448/719] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:01:52.273 [449/719] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:52.273 [450/719] Compiling C object drivers/librte_bus_pci.so.24.2.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:52.273 [451/719] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:52.273 [452/719] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:52.273 [453/719] Linking static target drivers/librte_bus_pci.a 00:01:52.532 [454/719] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:01:52.532 [455/719] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:01:52.532 [456/719] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:01:52.532 [457/719] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:52.532 [458/719] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:52.532 [459/719] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:52.532 [460/719] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:52.532 [461/719] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:52.532 [462/719] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:01:52.532 [463/719] Linking static target lib/librte_member.a 00:01:52.532 [464/719] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:01:52.532 [465/719] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:52.532 [466/719] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:52.532 [467/719] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:52.532 [468/719] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:52.532 [469/719] Linking static target lib/librte_ipsec.a 00:01:52.532 [470/719] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:01:52.532 [471/719] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:52.532 [472/719] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 
00:01:52.532 [473/719] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:01:52.532 [474/719] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.532 [475/719] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:52.532 [476/719] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:01:52.532 [477/719] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:52.792 [478/719] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:52.792 [479/719] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:01:52.792 [480/719] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:01:52.792 [481/719] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:01:52.792 [482/719] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:01:52.792 [483/719] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:01:52.792 [484/719] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:01:52.792 [485/719] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:01:52.792 [486/719] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:01:52.792 [487/719] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:52.792 [488/719] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:01:52.792 [489/719] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:01:52.792 [490/719] Linking static target lib/librte_pdcp.a 00:01:52.792 [491/719] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:01:52.792 [492/719] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:52.792 [493/719] Linking static target lib/librte_node.a 00:01:52.792 [494/719] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:01:52.792 [495/719] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:52.792 [496/719] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:52.792 [497/719] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:52.792 [498/719] Compiling C object drivers/librte_mempool_ring.so.24.2.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:52.792 [499/719] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:01:52.792 [500/719] Linking static target drivers/librte_mempool_ring.a 00:01:52.792 [501/719] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.792 [502/719] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.792 [503/719] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:52.792 [504/719] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:01:52.792 [505/719] Linking static target lib/librte_hash.a 00:01:52.792 [506/719] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:52.792 [507/719] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:52.792 [508/719] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:01:52.792 [509/719] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture 
output) 00:01:52.792 [510/719] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:52.792 [511/719] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:53.050 [512/719] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:53.050 [513/719] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.050 [514/719] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:53.050 [515/719] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:53.050 [516/719] Linking static target lib/librte_port.a 00:01:53.050 [517/719] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:01:53.050 [518/719] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:01:53.050 [519/719] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:53.050 [520/719] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:53.050 [521/719] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.050 [522/719] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:01:53.050 [523/719] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:01:53.050 [524/719] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:01:53.050 [525/719] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:01:53.050 [526/719] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.050 [527/719] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:01:53.050 [528/719] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:53.050 [529/719] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:01:53.050 [530/719] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:01:53.050 [531/719] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:01:53.050 [532/719] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:01:53.050 [533/719] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:01:53.050 [534/719] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:53.050 [535/719] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:53.050 [536/719] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:53.050 [537/719] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:53.050 [538/719] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:53.050 [539/719] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.050 [540/719] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:53.050 [541/719] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.050 [542/719] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:53.050 [543/719] Linking static target lib/acl/libavx2_tmp.a 00:01:53.311 [544/719] Linking static target lib/librte_eventdev.a 00:01:53.311 [545/719] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.311 [546/719] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:53.311 [547/719] Compiling C object 
app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:01:53.311 [548/719] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:01:53.311 [549/719] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:01:53.311 [550/719] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:01:53.311 [551/719] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:01:53.311 [552/719] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:01:53.311 [553/719] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:01:53.311 [554/719] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:01:53.311 [555/719] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:01:53.311 [556/719] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:01:53.311 [557/719] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:01:53.311 [558/719] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:01:53.311 [559/719] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:01:53.311 [560/719] Compiling C object app/dpdk-test-security-perf.p/test_test_security_proto.c.o 00:01:53.311 [561/719] Linking static target lib/librte_acl.a 00:01:53.311 [562/719] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:01:53.311 [563/719] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:01:53.311 [564/719] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:01:53.311 [565/719] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:01:53.311 [566/719] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:53.638 [567/719] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:53.638 [568/719] Linking static target drivers/net/i40e/base/libi40e_base.a 00:01:53.638 [569/719] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:01:53.638 [570/719] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:53.638 [571/719] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:53.638 [572/719] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:01:53.639 [573/719] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:01:53.639 [574/719] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:01:53.639 [575/719] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:01:53.639 [576/719] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:01:53.639 [577/719] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:01:53.639 [578/719] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.639 [579/719] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.897 [580/719] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.897 [581/719] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:01:53.897 [582/719] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:01:53.897 [583/719] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:01:54.156 [584/719] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:54.156 [585/719] Linking static target lib/librte_ethdev.a 00:01:54.156 [586/719] Compiling C object 
app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:01:54.156 [587/719] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.414 [588/719] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:01:54.672 [589/719] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:01:54.672 [590/719] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:01:54.931 [591/719] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:01:54.931 [592/719] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:01:55.498 [593/719] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:01:55.498 [594/719] Linking static target drivers/libtmp_rte_net_i40e.a 00:01:55.498 [595/719] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:01:55.756 [596/719] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:01:55.756 [597/719] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:55.756 [598/719] Compiling C object drivers/librte_net_i40e.so.24.2.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:55.756 [599/719] Linking static target drivers/librte_net_i40e.a 00:01:56.322 [600/719] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:56.890 [601/719] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.890 [602/719] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.148 [603/719] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:01:57.148 [604/719] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:02.420 [605/719] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.420 [606/719] Linking target lib/librte_eal.so.24.2 00:02:02.420 [607/719] Generating symbol file lib/librte_eal.so.24.2.p/librte_eal.so.24.2.symbols 00:02:02.420 [608/719] Linking target lib/librte_cfgfile.so.24.2 00:02:02.420 [609/719] Linking target lib/librte_pci.so.24.2 00:02:02.420 [610/719] Linking target lib/librte_ring.so.24.2 00:02:02.420 [611/719] Linking target lib/librte_jobstats.so.24.2 00:02:02.420 [612/719] Linking target lib/librte_meter.so.24.2 00:02:02.420 [613/719] Linking target lib/librte_stack.so.24.2 00:02:02.420 [614/719] Linking target lib/librte_timer.so.24.2 00:02:02.420 [615/719] Linking target lib/librte_rawdev.so.24.2 00:02:02.420 [616/719] Linking target lib/librte_dmadev.so.24.2 00:02:02.420 [617/719] Linking target drivers/librte_bus_vdev.so.24.2 00:02:02.420 [618/719] Linking target lib/librte_acl.so.24.2 00:02:02.420 [619/719] Generating symbol file lib/librte_pci.so.24.2.p/librte_pci.so.24.2.symbols 00:02:02.420 [620/719] Generating symbol file lib/librte_timer.so.24.2.p/librte_timer.so.24.2.symbols 00:02:02.420 [621/719] Generating symbol file lib/librte_meter.so.24.2.p/librte_meter.so.24.2.symbols 00:02:02.420 [622/719] Generating symbol file lib/librte_dmadev.so.24.2.p/librte_dmadev.so.24.2.symbols 00:02:02.421 [623/719] Generating symbol file lib/librte_ring.so.24.2.p/librte_ring.so.24.2.symbols 00:02:02.421 [624/719] Generating symbol file lib/librte_acl.so.24.2.p/librte_acl.so.24.2.symbols 00:02:02.421 [625/719] Generating symbol file drivers/librte_bus_vdev.so.24.2.p/librte_bus_vdev.so.24.2.symbols 00:02:02.421 [626/719] Linking target drivers/librte_bus_pci.so.24.2 00:02:02.421 [627/719] Linking target 
lib/librte_rcu.so.24.2 00:02:02.421 [628/719] Linking target lib/librte_mempool.so.24.2 00:02:02.680 [629/719] Generating symbol file drivers/librte_bus_pci.so.24.2.p/librte_bus_pci.so.24.2.symbols 00:02:02.680 [630/719] Generating symbol file lib/librte_rcu.so.24.2.p/librte_rcu.so.24.2.symbols 00:02:02.680 [631/719] Generating symbol file lib/librte_mempool.so.24.2.p/librte_mempool.so.24.2.symbols 00:02:02.680 [632/719] Linking target drivers/librte_mempool_ring.so.24.2 00:02:02.680 [633/719] Linking target lib/librte_rib.so.24.2 00:02:02.680 [634/719] Linking target lib/librte_mbuf.so.24.2 00:02:02.680 [635/719] Generating symbol file lib/librte_rib.so.24.2.p/librte_rib.so.24.2.symbols 00:02:03.030 [636/719] Generating symbol file lib/librte_mbuf.so.24.2.p/librte_mbuf.so.24.2.symbols 00:02:03.030 [637/719] Linking target lib/librte_fib.so.24.2 00:02:03.030 [638/719] Linking target lib/librte_gpudev.so.24.2 00:02:03.030 [639/719] Linking target lib/librte_bbdev.so.24.2 00:02:03.030 [640/719] Linking target lib/librte_mldev.so.24.2 00:02:03.030 [641/719] Linking target lib/librte_net.so.24.2 00:02:03.030 [642/719] Linking target lib/librte_compressdev.so.24.2 00:02:03.030 [643/719] Linking target lib/librte_regexdev.so.24.2 00:02:03.030 [644/719] Linking target lib/librte_distributor.so.24.2 00:02:03.030 [645/719] Linking target lib/librte_sched.so.24.2 00:02:03.030 [646/719] Linking target lib/librte_reorder.so.24.2 00:02:03.030 [647/719] Linking target lib/librte_cryptodev.so.24.2 00:02:03.030 [648/719] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.030 [649/719] Generating symbol file lib/librte_sched.so.24.2.p/librte_sched.so.24.2.symbols 00:02:03.030 [650/719] Generating symbol file lib/librte_reorder.so.24.2.p/librte_reorder.so.24.2.symbols 00:02:03.030 [651/719] Generating symbol file lib/librte_cryptodev.so.24.2.p/librte_cryptodev.so.24.2.symbols 00:02:03.031 [652/719] Generating symbol file lib/librte_net.so.24.2.p/librte_net.so.24.2.symbols 00:02:03.031 [653/719] Linking target lib/librte_hash.so.24.2 00:02:03.031 [654/719] Linking target lib/librte_security.so.24.2 00:02:03.031 [655/719] Linking target lib/librte_cmdline.so.24.2 00:02:03.031 [656/719] Linking target lib/librte_ethdev.so.24.2 00:02:03.289 [657/719] Generating symbol file lib/librte_hash.so.24.2.p/librte_hash.so.24.2.symbols 00:02:03.289 [658/719] Generating symbol file lib/librte_security.so.24.2.p/librte_security.so.24.2.symbols 00:02:03.289 [659/719] Generating symbol file lib/librte_ethdev.so.24.2.p/librte_ethdev.so.24.2.symbols 00:02:03.289 [660/719] Linking target lib/librte_efd.so.24.2 00:02:03.289 [661/719] Linking target lib/librte_lpm.so.24.2 00:02:03.289 [662/719] Linking target lib/librte_member.so.24.2 00:02:03.289 [663/719] Linking target lib/librte_pdcp.so.24.2 00:02:03.290 [664/719] Linking target lib/librte_ipsec.so.24.2 00:02:03.290 [665/719] Linking target lib/librte_metrics.so.24.2 00:02:03.290 [666/719] Linking target lib/librte_gso.so.24.2 00:02:03.290 [667/719] Linking target lib/librte_bpf.so.24.2 00:02:03.290 [668/719] Linking target lib/librte_gro.so.24.2 00:02:03.290 [669/719] Linking target lib/librte_ip_frag.so.24.2 00:02:03.290 [670/719] Linking target lib/librte_power.so.24.2 00:02:03.290 [671/719] Linking target lib/librte_pcapng.so.24.2 00:02:03.290 [672/719] Linking target lib/librte_eventdev.so.24.2 00:02:03.290 [673/719] Linking target drivers/librte_net_i40e.so.24.2 00:02:03.290 [674/719] Generating symbol file 
lib/librte_lpm.so.24.2.p/librte_lpm.so.24.2.symbols 00:02:03.290 [675/719] Generating symbol file lib/librte_ipsec.so.24.2.p/librte_ipsec.so.24.2.symbols 00:02:03.290 [676/719] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:03.290 [677/719] Generating symbol file lib/librte_bpf.so.24.2.p/librte_bpf.so.24.2.symbols 00:02:03.548 [678/719] Generating symbol file lib/librte_ip_frag.so.24.2.p/librte_ip_frag.so.24.2.symbols 00:02:03.548 [679/719] Generating symbol file lib/librte_metrics.so.24.2.p/librte_metrics.so.24.2.symbols 00:02:03.548 [680/719] Linking static target lib/librte_pipeline.a 00:02:03.548 [681/719] Generating symbol file lib/librte_pcapng.so.24.2.p/librte_pcapng.so.24.2.symbols 00:02:03.548 [682/719] Generating symbol file lib/librte_eventdev.so.24.2.p/librte_eventdev.so.24.2.symbols 00:02:03.548 [683/719] Linking target lib/librte_latencystats.so.24.2 00:02:03.548 [684/719] Linking target lib/librte_bitratestats.so.24.2 00:02:03.548 [685/719] Linking target lib/librte_pdump.so.24.2 00:02:03.548 [686/719] Linking target lib/librte_graph.so.24.2 00:02:03.548 [687/719] Linking target lib/librte_dispatcher.so.24.2 00:02:03.548 [688/719] Linking target lib/librte_port.so.24.2 00:02:03.548 [689/719] Generating symbol file lib/librte_graph.so.24.2.p/librte_graph.so.24.2.symbols 00:02:03.548 [690/719] Generating symbol file lib/librte_port.so.24.2.p/librte_port.so.24.2.symbols 00:02:03.807 [691/719] Linking target lib/librte_node.so.24.2 00:02:03.807 [692/719] Linking target lib/librte_table.so.24.2 00:02:03.807 [693/719] Generating symbol file lib/librte_table.so.24.2.p/librte_table.so.24.2.symbols 00:02:04.067 [694/719] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:04.067 [695/719] Linking static target lib/librte_vhost.a 00:02:04.635 [696/719] Linking target app/dpdk-proc-info 00:02:04.635 [697/719] Linking target app/dpdk-test-bbdev 00:02:04.635 [698/719] Linking target app/dpdk-pdump 00:02:04.635 [699/719] Linking target app/dpdk-test-crypto-perf 00:02:04.635 [700/719] Linking target app/dpdk-test-compress-perf 00:02:04.635 [701/719] Linking target app/dpdk-graph 00:02:04.635 [702/719] Linking target app/dpdk-test-acl 00:02:04.635 [703/719] Linking target app/dpdk-dumpcap 00:02:04.635 [704/719] Linking target app/dpdk-test-regex 00:02:04.635 [705/719] Linking target app/dpdk-test-cmdline 00:02:04.635 [706/719] Linking target app/dpdk-test-sad 00:02:04.635 [707/719] Linking target app/dpdk-test-gpudev 00:02:04.635 [708/719] Linking target app/dpdk-test-flow-perf 00:02:04.635 [709/719] Linking target app/dpdk-test-fib 00:02:04.635 [710/719] Linking target app/dpdk-test-dma-perf 00:02:04.635 [711/719] Linking target app/dpdk-test-pipeline 00:02:04.635 [712/719] Linking target app/dpdk-test-security-perf 00:02:04.635 [713/719] Linking target app/dpdk-test-mldev 00:02:04.635 [714/719] Linking target app/dpdk-test-eventdev 00:02:04.635 [715/719] Linking target app/dpdk-testpmd 00:02:06.013 [716/719] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.273 [717/719] Linking target lib/librte_vhost.so.24.2 00:02:09.571 [718/719] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.571 [719/719] Linking target lib/librte_pipeline.so.24.2 00:02:09.571 02:42:59 build_native_dpdk -- common/autobuild_common.sh@187 -- $ ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build-tmp -j112 install 00:02:09.571 ninja: Entering directory 
`/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build-tmp' 00:02:09.571 [0/1] Installing files. 00:02:09.571 Installing subdir /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples 00:02:09.571 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.571 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.571 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.571 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.571 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.571 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.571 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.571 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.571 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.571 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.571 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.571 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.571 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.571 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.571 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.571 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.571 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.571 
Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.571 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.571 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.571 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.571 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.571 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.571 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.571 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.571 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.571 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.571 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.571 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.571 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.571 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.571 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.571 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.572 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:09.572 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:09.572 Installing 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:09.572 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:09.572 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:09.572 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:09.572 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:09.572 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:09.572 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:09.572 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:09.572 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:09.572 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:09.572 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:09.572 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:09.572 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:09.572 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:09.572 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:09.572 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:09.572 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:09.572 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/cmdline/Makefile to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:09.572 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:09.572 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:09.572 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:09.572 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:09.572 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:09.572 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:09.572 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:09.572 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:09.572 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:09.572 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:09.572 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:09.572 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:09.572 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:09.572 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:09.572 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:09.572 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:09.572 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:09.572 Installing 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:09.572 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:09.572 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:09.572 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:09.572 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:09.572 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:09.572 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:09.572 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:09.572 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:09.572 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:09.572 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:09.572 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:09.572 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:09.572 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:09.572 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:09.572 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:09.572 Installing 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:09.572 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:09.572 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:09.572 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:09.572 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:09.572 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:09.572 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:09.572 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:09.572 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:09.572 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:09.572 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:09.572 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:09.572 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:09.573 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:09.573 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:09.573 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:09.573 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:09.573 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 
00:02:09.573 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:09.573 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:09.573 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:09.573 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:09.573 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:09.573 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:09.573 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:09.573 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:09.573 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:09.573 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:09.573 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:09.573 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:09.573 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:09.573 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:09.573 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:09.573 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:09.573 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:09.573 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:09.573 Installing 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:09.573 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:09.573 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:09.573 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:09.573 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:09.573 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:09.573 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:09.573 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:09.573 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:09.573 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:09.573 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:09.573 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:09.573 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:09.573 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:09.573 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:09.573 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:09.573 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:09.573 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:09.573 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:09.573 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:09.573 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:09.573 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:09.573 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:09.573 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:09.573 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:09.573 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:09.573 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:09.573 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:09.573 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:09.573 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:09.573 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:09.573 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:09.573 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:09.573 Installing 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:09.573 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:09.573 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:09.573 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:09.573 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:09.573 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:09.573 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:09.573 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:09.574 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:09.574 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:09.574 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:09.574 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:09.574 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:09.574 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:09.574 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:09.574 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:09.574 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:09.574 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:09.574 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:09.574 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:09.574 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:09.574 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:09.574 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:09.574 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:09.574 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:09.574 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:09.574 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:09.574 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:09.574 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:09.574 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:09.574 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:09.574 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:09.574 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:09.574 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:09.574 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:09.574 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:09.574 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:09.574 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:09.574 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:09.574 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:09.574 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:09.574 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.574 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.574 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.574 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.574 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.574 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.574 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.574 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.574 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.574 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.574 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.574 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.574 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.574 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.574 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.574 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.574 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.574 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.574 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.574 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.574 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.574 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.574 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.574 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.574 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.574 Installing 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.574 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.574 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.574 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.574 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.574 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.574 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.575 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.575 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.575 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.575 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.575 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.575 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.575 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.575 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.575 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.575 Installing 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.575 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.575 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.575 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.575 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.575 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.575 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.575 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.575 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.575 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.575 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.575 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.575 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.575 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:09.575 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:09.575 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:09.575 Installing 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:09.575 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:09.575 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:09.575 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:09.575 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:09.575 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:09.575 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:09.575 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:09.575 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:09.575 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.575 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.575 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.575 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.575 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.575 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.575 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.575 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.575 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.575 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.575 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.575 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.575 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.575 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.575 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.575 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.575 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.575 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.575 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.575 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.575 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.575 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.575 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.575 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.575 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.575 Installing 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.575 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.575 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:09.575 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:09.575 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:09.575 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:09.576 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:09.576 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:09.576 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:09.576 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:09.576 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:09.576 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:09.576 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:09.576 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:09.576 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:09.576 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:09.576 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:09.576 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/link_status_interrupt/main.c to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:09.576 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:09.576 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:09.576 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:09.576 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:09.576 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:09.576 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:09.576 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:09.576 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:09.576 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:09.576 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:09.576 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:09.576 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:09.576 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:09.576 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:09.576 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:09.576 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.576 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.576 Installing 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.576 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.576 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.576 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.576 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.576 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.576 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.576 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.576 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.576 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.576 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.576 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.576 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.576 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.576 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.576 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.576 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.576 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.576 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.576 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.576 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.576 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/ipv6_addr_swap.spec to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.576 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.577 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.577 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.577 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.577 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.577 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.577 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.577 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.577 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.577 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.577 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.577 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.577 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.577 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.577 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.577 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.577 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.577 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.577 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/ipv6_addr_swap.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.577 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.577 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:09.577 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:09.577 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:09.577 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:09.577 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:09.577 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:02:09.577 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:09.577 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:09.577 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:09.577 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:09.577 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:09.577 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:09.577 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:09.577 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:09.577 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:09.577 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:09.577 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:09.577 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:09.577 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:09.577 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:09.577 Installing lib/librte_log.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.577 Installing lib/librte_log.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.577 Installing lib/librte_kvargs.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.577 Installing lib/librte_kvargs.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.577 Installing lib/librte_argparse.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.577 Installing lib/librte_argparse.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.577 Installing lib/librte_telemetry.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.577 Installing lib/librte_telemetry.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.577 Installing lib/librte_eal.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.577 Installing lib/librte_eal.so.24.2 to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.577 Installing lib/librte_ring.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.577 Installing lib/librte_ring.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.577 Installing lib/librte_rcu.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.577 Installing lib/librte_rcu.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.577 Installing lib/librte_mempool.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.577 Installing lib/librte_mempool.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.577 Installing lib/librte_mbuf.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.577 Installing lib/librte_mbuf.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.577 Installing lib/librte_net.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.577 Installing lib/librte_net.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.577 Installing lib/librte_meter.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.577 Installing lib/librte_meter.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.577 Installing lib/librte_ethdev.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.577 Installing lib/librte_ethdev.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.577 Installing lib/librte_pci.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.577 Installing lib/librte_pci.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.577 Installing lib/librte_cmdline.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.577 Installing lib/librte_cmdline.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.577 Installing lib/librte_metrics.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.577 Installing lib/librte_metrics.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.577 Installing lib/librte_hash.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.577 Installing lib/librte_hash.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.577 Installing lib/librte_timer.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.577 Installing lib/librte_timer.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.577 Installing lib/librte_acl.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.577 Installing lib/librte_acl.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.577 Installing lib/librte_bbdev.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.577 Installing lib/librte_bbdev.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.577 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.577 Installing lib/librte_bitratestats.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.577 Installing lib/librte_bpf.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.577 Installing lib/librte_bpf.so.24.2 to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.577 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.577 Installing lib/librte_cfgfile.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.577 Installing lib/librte_compressdev.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.577 Installing lib/librte_compressdev.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.578 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.578 Installing lib/librte_cryptodev.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.578 Installing lib/librte_distributor.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.578 Installing lib/librte_distributor.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.578 Installing lib/librte_dmadev.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.578 Installing lib/librte_dmadev.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.578 Installing lib/librte_efd.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.578 Installing lib/librte_efd.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.578 Installing lib/librte_eventdev.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.578 Installing lib/librte_eventdev.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.578 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.578 Installing lib/librte_dispatcher.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.578 Installing lib/librte_gpudev.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.578 Installing lib/librte_gpudev.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.578 Installing lib/librte_gro.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.578 Installing lib/librte_gro.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.840 Installing lib/librte_gso.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.840 Installing lib/librte_gso.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.840 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.840 Installing lib/librte_ip_frag.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.840 Installing lib/librte_jobstats.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.840 Installing lib/librte_jobstats.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.840 Installing lib/librte_latencystats.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.840 Installing lib/librte_latencystats.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.840 Installing lib/librte_lpm.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.840 Installing lib/librte_lpm.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.840 Installing lib/librte_member.a to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.840 Installing lib/librte_member.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.840 Installing lib/librte_pcapng.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.840 Installing lib/librte_pcapng.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.840 Installing lib/librte_power.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.840 Installing lib/librte_power.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.840 Installing lib/librte_rawdev.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.840 Installing lib/librte_rawdev.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.840 Installing lib/librte_regexdev.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.840 Installing lib/librte_regexdev.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.840 Installing lib/librte_mldev.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.840 Installing lib/librte_mldev.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.840 Installing lib/librte_rib.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.840 Installing lib/librte_rib.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.840 Installing lib/librte_reorder.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.840 Installing lib/librte_reorder.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.840 Installing lib/librte_sched.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.840 Installing lib/librte_sched.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.840 Installing lib/librte_security.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.840 Installing lib/librte_security.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.840 Installing lib/librte_stack.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.840 Installing lib/librte_stack.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.840 Installing lib/librte_vhost.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.840 Installing lib/librte_vhost.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.840 Installing lib/librte_ipsec.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.840 Installing lib/librte_ipsec.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.840 Installing lib/librte_pdcp.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.840 Installing lib/librte_pdcp.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.840 Installing lib/librte_fib.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.840 Installing lib/librte_fib.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.840 Installing lib/librte_port.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.840 Installing lib/librte_port.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.840 Installing 
lib/librte_pdump.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.840 Installing lib/librte_pdump.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.840 Installing lib/librte_table.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.840 Installing lib/librte_table.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.840 Installing lib/librte_pipeline.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.840 Installing lib/librte_pipeline.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.840 Installing lib/librte_graph.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.840 Installing lib/librte_graph.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.840 Installing lib/librte_node.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.840 Installing lib/librte_node.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.840 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.840 Installing drivers/librte_bus_pci.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2 00:02:09.840 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.840 Installing drivers/librte_bus_vdev.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2 00:02:09.840 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.840 Installing drivers/librte_mempool_ring.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2 00:02:09.840 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:09.840 Installing drivers/librte_net_i40e.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2 00:02:09.840 Installing app/dpdk-dumpcap to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:02:09.840 Installing app/dpdk-graph to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:02:09.840 Installing app/dpdk-pdump to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:02:09.840 Installing app/dpdk-proc-info to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:02:09.840 Installing app/dpdk-test-acl to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:02:09.840 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:02:09.840 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:02:09.840 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:02:09.840 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:02:09.840 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:02:09.840 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:02:09.840 Installing app/dpdk-test-fib to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:02:09.840 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:02:09.840 Installing 
app/dpdk-test-gpudev to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:02:09.840 Installing app/dpdk-test-mldev to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:02:09.840 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:02:09.840 Installing app/dpdk-testpmd to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:02:09.840 Installing app/dpdk-test-regex to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:02:09.840 Installing app/dpdk-test-sad to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:02:09.840 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:02:09.840 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.840 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.840 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.840 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/argparse/rte_argparse.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.840 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.840 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include/generic 00:02:09.840 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include/generic 00:02:09.840 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include/generic 00:02:09.840 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include/generic 00:02:09.840 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include/generic 00:02:09.840 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include/generic 00:02:09.841 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include/generic 00:02:09.841 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include/generic 00:02:09.841 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include/generic 00:02:09.841 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include/generic 00:02:09.841 Installing 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include/generic 00:02:09.841 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include/generic 00:02:09.841 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.841 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.841 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.841 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.841 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.841 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.841 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.841 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.841 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.841 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.841 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.841 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.841 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.841 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.841 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.841 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.841 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.841 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.841 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.841 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.841 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.841 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.841 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.841 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.841 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.841 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.841 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.841 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.841 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.841 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.841 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.841 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.841 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.841 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.841 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.841 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.841 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.841 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.841 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.841 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.841 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.841 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.841 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.841 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.841 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.841 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.841 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.841 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.841 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.841 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.841 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.841 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.841 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.841 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.841 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.841 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.841 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.841 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.841 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.841 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.841 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.841 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.841 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.841 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.841 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.841 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.841 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.841 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 
00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.842 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.843 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.843 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.843 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.843 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.843 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.843 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.843 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.843 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.843 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.843 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.843 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.843 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.843 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.843 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.843 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.843 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.843 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.843 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.843 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.843 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.843 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.843 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.843 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.843 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.843 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.843 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.843 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/dmadev/rte_dmadev_trace_fp.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.843 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.843 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.843 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.843 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.843 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.843 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.843 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.843 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.843 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.843 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.843 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:09.843 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.105 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.105 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.105 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.105 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.105 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.105 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.105 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.105 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.105 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.105 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.105 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.105 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.105 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.105 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.105 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.105 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.105 Installing 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.105 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.105 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.105 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.105 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.105 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.105 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.105 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.105 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.105 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.105 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.105 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.105 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.105 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.105 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.105 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.105 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.105 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.105 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.105 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.105 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/stack/rte_stack_std.h to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.105 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.105 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.105 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.105 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.105 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.105 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.105 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.105 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.105 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.105 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.105 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.105 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.105 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.105 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.105 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.105 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.105 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.105 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.105 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.105 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.105 Installing 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.105 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.106 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.106 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.106 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.106 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.106 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.106 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.106 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.106 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.106 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.106 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.106 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.106 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.106 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.106 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.106 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.106 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.106 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.106 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.106 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_table_acl.h to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.106 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.106 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.106 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.106 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.106 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.106 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.106 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.106 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.106 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.106 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.106 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.106 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.106 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.106 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.106 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.106 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.106 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.106 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.106 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.106 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.106 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.106 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.106 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.106 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.106 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.106 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.106 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.106 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.106 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.106 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:02:10.106 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:02:10.106 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:02:10.106 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:02:10.106 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:02:10.106 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:02:10.106 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.106 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/pkgconfig 00:02:10.106 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/pkgconfig 00:02:10.106 Installing symlink pointing to librte_log.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_log.so.24 00:02:10.106 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_log.so 00:02:10.106 Installing symlink pointing to 
librte_kvargs.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:02:10.106 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:10.106 Installing symlink pointing to librte_argparse.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_argparse.so.24 00:02:10.106 Installing symlink pointing to librte_argparse.so.24 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_argparse.so 00:02:10.106 Installing symlink pointing to librte_telemetry.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:02:10.106 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:10.106 Installing symlink pointing to librte_eal.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:02:10.106 Installing symlink pointing to librte_eal.so.24 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:10.106 Installing symlink pointing to librte_ring.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:02:10.106 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:10.106 Installing symlink pointing to librte_rcu.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:02:10.106 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:10.106 Installing symlink pointing to librte_mempool.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:02:10.106 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:10.106 Installing symlink pointing to librte_mbuf.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:02:10.106 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:10.106 Installing symlink pointing to librte_net.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_net.so.24 00:02:10.106 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_net.so 00:02:10.106 Installing symlink pointing to librte_meter.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:02:10.106 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:10.106 Installing symlink pointing to librte_ethdev.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:02:10.107 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:10.107 Installing symlink pointing to librte_pci.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:02:10.107 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:10.107 Installing symlink pointing to librte_cmdline.so.24.2 to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:02:10.107 Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:10.107 Installing symlink pointing to librte_metrics.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:02:10.107 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:10.107 Installing symlink pointing to librte_hash.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:02:10.107 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:10.107 Installing symlink pointing to librte_timer.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:02:10.107 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:10.107 Installing symlink pointing to librte_acl.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:02:10.107 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:10.107 Installing symlink pointing to librte_bbdev.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:02:10.107 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:10.107 Installing symlink pointing to librte_bitratestats.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:02:10.107 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:10.107 Installing symlink pointing to librte_bpf.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:02:10.107 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:10.107 Installing symlink pointing to librte_cfgfile.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:02:10.107 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:10.107 Installing symlink pointing to librte_compressdev.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:02:10.107 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:10.107 Installing symlink pointing to librte_cryptodev.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:02:10.107 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:10.107 Installing symlink pointing to librte_distributor.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:02:10.107 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:10.107 
Installing symlink pointing to librte_dmadev.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:02:10.107 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:10.107 Installing symlink pointing to librte_efd.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:02:10.107 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:10.107 Installing symlink pointing to librte_eventdev.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:02:10.107 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:10.107 Installing symlink pointing to librte_dispatcher.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:02:10.107 Installing symlink pointing to librte_dispatcher.so.24 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:02:10.107 Installing symlink pointing to librte_gpudev.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:02:10.107 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:10.107 Installing symlink pointing to librte_gro.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:02:10.107 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:10.107 Installing symlink pointing to librte_gso.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:02:10.107 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:10.107 Installing symlink pointing to librte_ip_frag.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:02:10.107 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:10.107 Installing symlink pointing to librte_jobstats.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:02:10.107 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:10.107 Installing symlink pointing to librte_latencystats.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:02:10.107 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:10.107 Installing symlink pointing to librte_lpm.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:02:10.107 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:10.107 Installing symlink pointing to librte_member.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_member.so.24 00:02:10.107 Installing symlink pointing to librte_member.so.24 to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_member.so 00:02:10.107 Installing symlink pointing to librte_pcapng.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:02:10.107 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:10.107 Installing symlink pointing to librte_power.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_power.so.24 00:02:10.107 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_power.so 00:02:10.107 Installing symlink pointing to librte_rawdev.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:02:10.107 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:10.107 Installing symlink pointing to librte_regexdev.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:02:10.107 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:10.107 Installing symlink pointing to librte_mldev.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:02:10.107 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_mldev.so 00:02:10.107 Installing symlink pointing to librte_rib.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:02:10.107 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:10.107 Installing symlink pointing to librte_reorder.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:02:10.107 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:10.107 Installing symlink pointing to librte_sched.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:02:10.107 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:10.107 Installing symlink pointing to librte_security.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_security.so.24 00:02:10.107 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_security.so 00:02:10.107 Installing symlink pointing to librte_stack.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:02:10.107 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:10.107 Installing symlink pointing to librte_vhost.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:02:10.107 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:10.107 Installing symlink pointing to librte_ipsec.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:02:10.107 Installing symlink pointing to librte_ipsec.so.24 to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:10.107 Installing symlink pointing to librte_pdcp.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:02:10.107 Installing symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:02:10.107 Installing symlink pointing to librte_fib.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:02:10.107 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:10.107 Installing symlink pointing to librte_port.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_port.so.24 00:02:10.107 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_port.so 00:02:10.107 Installing symlink pointing to librte_pdump.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:02:10.107 './librte_bus_pci.so' -> 'dpdk/pmds-24.2/librte_bus_pci.so' 00:02:10.107 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.2/librte_bus_pci.so.24' 00:02:10.107 './librte_bus_pci.so.24.2' -> 'dpdk/pmds-24.2/librte_bus_pci.so.24.2' 00:02:10.107 './librte_bus_vdev.so' -> 'dpdk/pmds-24.2/librte_bus_vdev.so' 00:02:10.107 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.2/librte_bus_vdev.so.24' 00:02:10.107 './librte_bus_vdev.so.24.2' -> 'dpdk/pmds-24.2/librte_bus_vdev.so.24.2' 00:02:10.107 './librte_mempool_ring.so' -> 'dpdk/pmds-24.2/librte_mempool_ring.so' 00:02:10.107 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.2/librte_mempool_ring.so.24' 00:02:10.107 './librte_mempool_ring.so.24.2' -> 'dpdk/pmds-24.2/librte_mempool_ring.so.24.2' 00:02:10.107 './librte_net_i40e.so' -> 'dpdk/pmds-24.2/librte_net_i40e.so' 00:02:10.108 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.2/librte_net_i40e.so.24' 00:02:10.108 './librte_net_i40e.so.24.2' -> 'dpdk/pmds-24.2/librte_net_i40e.so.24.2' 00:02:10.108 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:10.108 Installing symlink pointing to librte_table.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_table.so.24 00:02:10.108 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_table.so 00:02:10.108 Installing symlink pointing to librte_pipeline.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:02:10.108 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:10.108 Installing symlink pointing to librte_graph.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:02:10.108 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:10.108 Installing symlink pointing to librte_node.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_node.so.24 00:02:10.108 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_node.so 00:02:10.108 Installing symlink pointing to librte_bus_pci.so.24.2 to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so.24 00:02:10.108 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so 00:02:10.108 Installing symlink pointing to librte_bus_vdev.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so.24 00:02:10.108 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so 00:02:10.108 Installing symlink pointing to librte_mempool_ring.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so.24 00:02:10.108 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so 00:02:10.108 Installing symlink pointing to librte_net_i40e.so.24.2 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so.24 00:02:10.108 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so 00:02:10.108 Running custom install script '/bin/sh /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.2' 00:02:10.108 02:43:00 build_native_dpdk -- common/autobuild_common.sh@189 -- $ uname -s 00:02:10.108 02:43:00 build_native_dpdk -- common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:10.108 02:43:00 build_native_dpdk -- common/autobuild_common.sh@200 -- $ cat 00:02:10.108 02:43:00 build_native_dpdk -- common/autobuild_common.sh@205 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:10.108 00:02:10.108 real 0m27.970s 00:02:10.108 user 8m12.221s 00:02:10.108 sys 2m31.498s 00:02:10.108 02:43:00 build_native_dpdk -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:02:10.108 02:43:00 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:10.108 ************************************ 00:02:10.108 END TEST build_native_dpdk 00:02:10.108 ************************************ 00:02:10.108 02:43:00 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:10.108 02:43:00 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:10.108 02:43:00 -- spdk/autobuild.sh@51 -- $ [[ 1 -eq 1 ]] 00:02:10.108 02:43:00 -- spdk/autobuild.sh@52 -- $ llvm_precompile 00:02:10.108 02:43:00 -- common/autobuild_common.sh@425 -- $ run_test autobuild_llvm_precompile _llvm_precompile 00:02:10.108 02:43:00 -- common/autotest_common.sh@1097 -- $ '[' 2 -le 1 ']' 00:02:10.108 02:43:00 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:02:10.108 02:43:00 -- common/autotest_common.sh@10 -- $ set +x 00:02:10.108 ************************************ 00:02:10.108 START TEST autobuild_llvm_precompile 00:02:10.108 ************************************ 00:02:10.108 02:43:00 autobuild_llvm_precompile -- common/autotest_common.sh@1121 -- $ _llvm_precompile 00:02:10.108 02:43:00 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ clang --version 00:02:10.108 02:43:00 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ [[ clang version 16.0.6 (Fedora 16.0.6-3.fc38) 00:02:10.108 Target: x86_64-redhat-linux-gnu 00:02:10.108 Thread model: posix 00:02:10.108 InstalledDir: /usr/bin =~ version (([0-9]+).([0-9]+).([0-9]+)) ]] 00:02:10.108 
02:43:00 autobuild_llvm_precompile -- common/autobuild_common.sh@33 -- $ clang_num=16 00:02:10.108 02:43:00 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ export CC=clang-16 00:02:10.108 02:43:00 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ CC=clang-16 00:02:10.108 02:43:00 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ export CXX=clang++-16 00:02:10.108 02:43:00 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ CXX=clang++-16 00:02:10.108 02:43:00 autobuild_llvm_precompile -- common/autobuild_common.sh@38 -- $ fuzzer_libs=(/usr/lib*/clang/@("$clang_num"|"$clang_version")/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a) 00:02:10.108 02:43:00 autobuild_llvm_precompile -- common/autobuild_common.sh@39 -- $ fuzzer_lib=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a 00:02:10.108 02:43:00 autobuild_llvm_precompile -- common/autobuild_common.sh@40 -- $ [[ -e /usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a ]] 00:02:10.108 02:43:00 autobuild_llvm_precompile -- common/autobuild_common.sh@42 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a' 00:02:10.108 02:43:00 autobuild_llvm_precompile -- common/autobuild_common.sh@44 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a 00:02:10.367 Using /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:02:10.627 DPDK libraries: /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:10.627 DPDK includes: //var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:10.627 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:02:11.196 Using 'verbs' RDMA provider 00:02:27.021 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:41.911 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:41.911 Creating mk/config.mk...done. 00:02:41.911 Creating mk/cc.flags.mk...done. 00:02:41.911 Type 'make' to build. 
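A minimal standalone sketch of the clang/libFuzzer lookup traced above, assuming a sed-based version parse (the CI script itself uses a bash regex match on `clang --version`); only the glob shape and the /usr/lib64/clang/16 path are taken from the log, the variable names and echo output are illustrative.

#!/usr/bin/env bash
# Illustrative sketch: find the clang major version and the matching
# libclang_rt.fuzzer_no_main archive, mirroring the lookup traced above.
shopt -s extglob nullglob

# Parse the major version out of `clang --version` (assumption: sed-based parse;
# the build script derives it from a regex match instead).
clang_num=$(clang --version | sed -n 's/^.*clang version \([0-9][0-9]*\)\..*$/\1/p' | head -n1)
echo "clang major version: ${clang_num:-unknown}"

# Same glob shape as in the log: the -x86_64 suffix is optional (extglob ?() group).
fuzzer_libs=(/usr/lib*/clang/"${clang_num}"*/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a)
echo "candidate fuzzer archives: ${fuzzer_libs[*]:-none found}"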
00:02:41.911 00:02:41.911 real 0m30.270s 00:02:41.911 user 0m12.880s 00:02:41.911 sys 0m16.766s 00:02:41.911 02:43:31 autobuild_llvm_precompile -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:02:41.911 02:43:31 autobuild_llvm_precompile -- common/autotest_common.sh@10 -- $ set +x 00:02:41.911 ************************************ 00:02:41.911 END TEST autobuild_llvm_precompile 00:02:41.912 ************************************ 00:02:41.912 02:43:31 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:41.912 02:43:31 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:41.912 02:43:31 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:41.912 02:43:31 -- spdk/autobuild.sh@62 -- $ [[ 1 -eq 1 ]] 00:02:41.912 02:43:31 -- spdk/autobuild.sh@64 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a 00:02:41.912 Using /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:02:41.912 DPDK libraries: /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:41.912 DPDK includes: //var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:41.912 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:02:41.912 Using 'verbs' RDMA provider 00:02:54.685 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:03:06.886 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:03:06.886 Creating mk/config.mk...done. 00:03:06.886 Creating mk/cc.flags.mk...done. 00:03:06.886 Type 'make' to build. 00:03:06.886 02:43:56 -- spdk/autobuild.sh@69 -- $ run_test make make -j112 00:03:06.886 02:43:56 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:03:06.886 02:43:56 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:03:06.886 02:43:56 -- common/autotest_common.sh@10 -- $ set +x 00:03:06.886 ************************************ 00:03:06.886 START TEST make 00:03:06.886 ************************************ 00:03:06.886 02:43:56 make -- common/autotest_common.sh@1121 -- $ make -j112 00:03:06.886 make[1]: Nothing to be done for 'all'. 
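For reference, a hand-run equivalent of the configure step traced above: it first checks that the freshly installed DPDK tree is visible to pkg-config (the source of the "Using .../pkgconfig for additional libs" message) and then points SPDK's configure at it. The flag list is abbreviated from the full command shown in the log, and the paths are the workspace paths shown there.

#!/usr/bin/env bash
# Illustrative sketch of configuring SPDK against the DPDK build installed above.
set -euo pipefail

DPDK_BUILD=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build
SPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk

# The log exports the clang 16 toolchain before configuring.
export CC=clang-16 CXX=clang++-16

# Verify pkg-config can resolve the libdpdk.pc produced by the DPDK install.
PKG_CONFIG_PATH="${DPDK_BUILD}/lib/pkgconfig" pkg-config --modversion libdpdk

# Abbreviated flag set; the log also passes the vfio-user, ublk and coverage options.
"${SPDK_DIR}/configure" \
    --enable-debug --enable-werror \
    --with-dpdk="${DPDK_BUILD}" \
    --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a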
00:03:07.821 The Meson build system 00:03:07.821 Version: 1.3.1 00:03:07.821 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user 00:03:07.822 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:07.822 Build type: native build 00:03:07.822 Project name: libvfio-user 00:03:07.822 Project version: 0.0.1 00:03:07.822 C compiler for the host machine: clang-16 (clang 16.0.6 "clang version 16.0.6 (Fedora 16.0.6-3.fc38)") 00:03:07.822 C linker for the host machine: clang-16 ld.bfd 2.39-16 00:03:07.822 Host machine cpu family: x86_64 00:03:07.822 Host machine cpu: x86_64 00:03:07.822 Run-time dependency threads found: YES 00:03:07.822 Library dl found: YES 00:03:07.822 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:07.822 Run-time dependency json-c found: YES 0.17 00:03:07.822 Run-time dependency cmocka found: YES 1.1.7 00:03:07.822 Program pytest-3 found: NO 00:03:07.822 Program flake8 found: NO 00:03:07.822 Program misspell-fixer found: NO 00:03:07.822 Program restructuredtext-lint found: NO 00:03:07.822 Program valgrind found: YES (/usr/bin/valgrind) 00:03:07.822 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:07.822 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:07.822 Compiler for C supports arguments -Wwrite-strings: YES 00:03:07.822 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:07.822 Program test-lspci.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:03:07.822 Program test-linkage.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:03:07.822 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:03:07.822 Build targets in project: 8 00:03:07.822 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:07.822 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:07.822 00:03:07.822 libvfio-user 0.0.1 00:03:07.822 00:03:07.822 User defined options 00:03:07.822 buildtype : debug 00:03:07.822 default_library: static 00:03:07.822 libdir : /usr/local/lib 00:03:07.822 00:03:07.822 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:08.080 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:08.080 [1/36] Compiling C object lib/libvfio-user.a.p/tran.c.o 00:03:08.080 [2/36] Compiling C object lib/libvfio-user.a.p/migration.c.o 00:03:08.080 [3/36] Compiling C object lib/libvfio-user.a.p/pci.c.o 00:03:08.080 [4/36] Compiling C object samples/lspci.p/lspci.c.o 00:03:08.080 [5/36] Compiling C object lib/libvfio-user.a.p/irq.c.o 00:03:08.080 [6/36] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:08.080 [7/36] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:08.339 [8/36] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:08.339 [9/36] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:08.339 [10/36] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:08.339 [11/36] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:08.339 [12/36] Compiling C object lib/libvfio-user.a.p/dma.c.o 00:03:08.339 [13/36] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:08.339 [14/36] Compiling C object test/unit_tests.p/mocks.c.o 00:03:08.339 [15/36] Compiling C object lib/libvfio-user.a.p/pci_caps.c.o 00:03:08.339 [16/36] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:08.339 [17/36] Compiling C object lib/libvfio-user.a.p/tran_sock.c.o 00:03:08.339 [18/36] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:08.339 [19/36] Compiling C object samples/server.p/server.c.o 00:03:08.339 [20/36] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:08.339 [21/36] Compiling C object samples/null.p/null.c.o 00:03:08.339 [22/36] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:08.339 [23/36] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:08.339 [24/36] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:08.339 [25/36] Compiling C object samples/client.p/client.c.o 00:03:08.339 [26/36] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:08.339 [27/36] Compiling C object lib/libvfio-user.a.p/libvfio-user.c.o 00:03:08.339 [28/36] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:08.339 [29/36] Linking static target lib/libvfio-user.a 00:03:08.339 [30/36] Linking target samples/client 00:03:08.339 [31/36] Linking target test/unit_tests 00:03:08.339 [32/36] Linking target samples/lspci 00:03:08.339 [33/36] Linking target samples/null 00:03:08.339 [34/36] Linking target samples/server 00:03:08.339 [35/36] Linking target samples/gpio-pci-idio-16 00:03:08.339 [36/36] Linking target samples/shadow_ioeventfd_server 00:03:08.339 INFO: autodetecting backend as ninja 00:03:08.339 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:08.339 DESTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user meson install --quiet -C 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:08.907 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:08.907 ninja: no work to do. 00:03:12.195 CC lib/ut_mock/mock.o 00:03:12.195 CC lib/ut/ut.o 00:03:12.195 CC lib/log/log.o 00:03:12.195 CC lib/log/log_flags.o 00:03:12.195 CC lib/log/log_deprecated.o 00:03:12.195 LIB libspdk_ut_mock.a 00:03:12.195 LIB libspdk_ut.a 00:03:12.195 LIB libspdk_log.a 00:03:12.454 CC lib/ioat/ioat.o 00:03:12.454 CC lib/dma/dma.o 00:03:12.454 CXX lib/trace_parser/trace.o 00:03:12.454 CC lib/util/base64.o 00:03:12.454 CC lib/util/cpuset.o 00:03:12.454 CC lib/util/bit_array.o 00:03:12.454 CC lib/util/crc32.o 00:03:12.454 CC lib/util/crc32c.o 00:03:12.454 CC lib/util/crc16.o 00:03:12.454 CC lib/util/crc64.o 00:03:12.454 CC lib/util/dif.o 00:03:12.454 CC lib/util/crc32_ieee.o 00:03:12.454 CC lib/util/file.o 00:03:12.454 CC lib/util/fd.o 00:03:12.454 CC lib/util/hexlify.o 00:03:12.454 CC lib/util/iov.o 00:03:12.454 CC lib/util/math.o 00:03:12.454 CC lib/util/pipe.o 00:03:12.454 CC lib/util/strerror_tls.o 00:03:12.454 CC lib/util/fd_group.o 00:03:12.454 CC lib/util/string.o 00:03:12.454 CC lib/util/uuid.o 00:03:12.454 CC lib/util/zipf.o 00:03:12.454 CC lib/util/xor.o 00:03:12.713 CC lib/vfio_user/host/vfio_user_pci.o 00:03:12.713 CC lib/vfio_user/host/vfio_user.o 00:03:12.713 LIB libspdk_dma.a 00:03:12.713 LIB libspdk_ioat.a 00:03:12.713 LIB libspdk_vfio_user.a 00:03:12.713 LIB libspdk_util.a 00:03:12.972 LIB libspdk_trace_parser.a 00:03:12.972 CC lib/env_dpdk/env.o 00:03:12.972 CC lib/env_dpdk/memory.o 00:03:12.972 CC lib/env_dpdk/pci.o 00:03:12.972 CC lib/env_dpdk/init.o 00:03:12.972 CC lib/env_dpdk/threads.o 00:03:12.972 CC lib/env_dpdk/pci_ioat.o 00:03:12.972 CC lib/env_dpdk/pci_virtio.o 00:03:12.972 CC lib/env_dpdk/pci_vmd.o 00:03:12.972 CC lib/env_dpdk/pci_idxd.o 00:03:12.972 CC lib/env_dpdk/sigbus_handler.o 00:03:12.972 CC lib/env_dpdk/pci_event.o 00:03:12.972 CC lib/env_dpdk/pci_dpdk.o 00:03:12.972 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:12.972 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:12.972 CC lib/vmd/vmd.o 00:03:12.972 CC lib/rdma/common.o 00:03:12.972 CC lib/vmd/led.o 00:03:12.972 CC lib/rdma/rdma_verbs.o 00:03:13.230 CC lib/idxd/idxd_user.o 00:03:13.230 CC lib/idxd/idxd.o 00:03:13.230 CC lib/conf/conf.o 00:03:13.230 CC lib/json/json_parse.o 00:03:13.230 CC lib/json/json_util.o 00:03:13.230 CC lib/json/json_write.o 00:03:13.230 LIB libspdk_rdma.a 00:03:13.230 LIB libspdk_conf.a 00:03:13.230 LIB libspdk_json.a 00:03:13.488 LIB libspdk_idxd.a 00:03:13.488 LIB libspdk_vmd.a 00:03:13.747 CC lib/jsonrpc/jsonrpc_server.o 00:03:13.747 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:13.747 CC lib/jsonrpc/jsonrpc_client.o 00:03:13.747 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:13.747 LIB libspdk_jsonrpc.a 00:03:14.006 LIB libspdk_env_dpdk.a 00:03:14.006 CC lib/rpc/rpc.o 00:03:14.265 LIB libspdk_rpc.a 00:03:14.523 CC lib/keyring/keyring_rpc.o 00:03:14.523 CC lib/keyring/keyring.o 00:03:14.523 CC lib/notify/notify.o 00:03:14.523 CC lib/notify/notify_rpc.o 00:03:14.523 CC lib/trace/trace.o 00:03:14.523 CC lib/trace/trace_flags.o 00:03:14.523 CC lib/trace/trace_rpc.o 00:03:14.782 LIB libspdk_notify.a 00:03:14.782 LIB libspdk_keyring.a 00:03:14.782 LIB libspdk_trace.a 00:03:15.040 CC lib/sock/sock.o 00:03:15.040 CC lib/sock/sock_rpc.o 00:03:15.040 CC lib/thread/thread.o 00:03:15.040 CC lib/thread/iobuf.o 00:03:15.298 LIB libspdk_sock.a 00:03:15.556 CC lib/nvme/nvme_ctrlr_cmd.o 
00:03:15.556 CC lib/nvme/nvme_ctrlr.o 00:03:15.556 CC lib/nvme/nvme_fabric.o 00:03:15.556 CC lib/nvme/nvme_ns_cmd.o 00:03:15.556 CC lib/nvme/nvme_ns.o 00:03:15.556 CC lib/nvme/nvme_pcie_common.o 00:03:15.556 CC lib/nvme/nvme_pcie.o 00:03:15.556 CC lib/nvme/nvme_qpair.o 00:03:15.556 CC lib/nvme/nvme.o 00:03:15.556 CC lib/nvme/nvme_quirks.o 00:03:15.556 CC lib/nvme/nvme_transport.o 00:03:15.556 CC lib/nvme/nvme_discovery.o 00:03:15.556 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:15.556 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:15.556 CC lib/nvme/nvme_tcp.o 00:03:15.557 CC lib/nvme/nvme_opal.o 00:03:15.557 CC lib/nvme/nvme_io_msg.o 00:03:15.557 CC lib/nvme/nvme_poll_group.o 00:03:15.557 CC lib/nvme/nvme_zns.o 00:03:15.557 CC lib/nvme/nvme_stubs.o 00:03:15.557 CC lib/nvme/nvme_auth.o 00:03:15.557 CC lib/nvme/nvme_cuse.o 00:03:15.557 CC lib/nvme/nvme_vfio_user.o 00:03:15.557 CC lib/nvme/nvme_rdma.o 00:03:15.815 LIB libspdk_thread.a 00:03:16.074 CC lib/blob/blobstore.o 00:03:16.074 CC lib/blob/request.o 00:03:16.074 CC lib/blob/zeroes.o 00:03:16.074 CC lib/blob/blob_bs_dev.o 00:03:16.074 CC lib/vfu_tgt/tgt_rpc.o 00:03:16.074 CC lib/accel/accel.o 00:03:16.074 CC lib/vfu_tgt/tgt_endpoint.o 00:03:16.074 CC lib/accel/accel_sw.o 00:03:16.074 CC lib/accel/accel_rpc.o 00:03:16.074 CC lib/init/json_config.o 00:03:16.074 CC lib/init/subsystem.o 00:03:16.074 CC lib/init/subsystem_rpc.o 00:03:16.074 CC lib/virtio/virtio.o 00:03:16.074 CC lib/virtio/virtio_vhost_user.o 00:03:16.074 CC lib/init/rpc.o 00:03:16.074 CC lib/virtio/virtio_vfio_user.o 00:03:16.074 CC lib/virtio/virtio_pci.o 00:03:16.333 LIB libspdk_init.a 00:03:16.333 LIB libspdk_virtio.a 00:03:16.333 LIB libspdk_vfu_tgt.a 00:03:16.592 CC lib/event/app.o 00:03:16.592 CC lib/event/reactor.o 00:03:16.592 CC lib/event/log_rpc.o 00:03:16.592 CC lib/event/app_rpc.o 00:03:16.592 CC lib/event/scheduler_static.o 00:03:16.851 LIB libspdk_accel.a 00:03:16.851 LIB libspdk_nvme.a 00:03:16.851 LIB libspdk_event.a 00:03:17.110 CC lib/bdev/bdev.o 00:03:17.110 CC lib/bdev/bdev_zone.o 00:03:17.110 CC lib/bdev/bdev_rpc.o 00:03:17.110 CC lib/bdev/part.o 00:03:17.110 CC lib/bdev/scsi_nvme.o 00:03:17.720 LIB libspdk_blob.a 00:03:18.045 CC lib/blobfs/blobfs.o 00:03:18.045 CC lib/blobfs/tree.o 00:03:18.045 CC lib/lvol/lvol.o 00:03:18.612 LIB libspdk_lvol.a 00:03:18.612 LIB libspdk_blobfs.a 00:03:18.870 LIB libspdk_bdev.a 00:03:19.128 CC lib/ublk/ublk.o 00:03:19.128 CC lib/ublk/ublk_rpc.o 00:03:19.128 CC lib/nvmf/ctrlr.o 00:03:19.128 CC lib/nvmf/ctrlr_discovery.o 00:03:19.128 CC lib/nvmf/ctrlr_bdev.o 00:03:19.128 CC lib/nvmf/nvmf.o 00:03:19.128 CC lib/nvmf/subsystem.o 00:03:19.128 CC lib/nvmf/transport.o 00:03:19.128 CC lib/nvmf/tcp.o 00:03:19.128 CC lib/nvmf/nvmf_rpc.o 00:03:19.128 CC lib/nvmf/stubs.o 00:03:19.128 CC lib/nvmf/vfio_user.o 00:03:19.128 CC lib/nvmf/rdma.o 00:03:19.128 CC lib/nvmf/auth.o 00:03:19.128 CC lib/scsi/port.o 00:03:19.128 CC lib/scsi/dev.o 00:03:19.128 CC lib/nbd/nbd.o 00:03:19.128 CC lib/ftl/ftl_core.o 00:03:19.128 CC lib/scsi/lun.o 00:03:19.128 CC lib/nbd/nbd_rpc.o 00:03:19.128 CC lib/scsi/scsi.o 00:03:19.128 CC lib/ftl/ftl_init.o 00:03:19.128 CC lib/scsi/scsi_bdev.o 00:03:19.128 CC lib/ftl/ftl_layout.o 00:03:19.128 CC lib/scsi/scsi_pr.o 00:03:19.128 CC lib/ftl/ftl_debug.o 00:03:19.128 CC lib/ftl/ftl_io.o 00:03:19.128 CC lib/scsi/scsi_rpc.o 00:03:19.128 CC lib/ftl/ftl_sb.o 00:03:19.128 CC lib/ftl/ftl_band.o 00:03:19.128 CC lib/scsi/task.o 00:03:19.128 CC lib/ftl/ftl_l2p.o 00:03:19.128 CC lib/ftl/ftl_nv_cache.o 00:03:19.128 CC lib/ftl/ftl_l2p_flat.o 
00:03:19.128 CC lib/ftl/ftl_band_ops.o 00:03:19.128 CC lib/ftl/ftl_writer.o 00:03:19.128 CC lib/ftl/ftl_rq.o 00:03:19.128 CC lib/ftl/ftl_p2l.o 00:03:19.128 CC lib/ftl/ftl_reloc.o 00:03:19.128 CC lib/ftl/ftl_l2p_cache.o 00:03:19.128 CC lib/ftl/mngt/ftl_mngt.o 00:03:19.128 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:19.128 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:19.128 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:19.128 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:19.128 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:19.128 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:19.128 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:19.128 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:19.128 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:19.128 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:19.128 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:19.128 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:19.128 CC lib/ftl/utils/ftl_conf.o 00:03:19.128 CC lib/ftl/utils/ftl_md.o 00:03:19.128 CC lib/ftl/utils/ftl_mempool.o 00:03:19.128 CC lib/ftl/utils/ftl_bitmap.o 00:03:19.128 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:19.128 CC lib/ftl/utils/ftl_property.o 00:03:19.128 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:19.128 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:19.128 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:19.128 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:19.128 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:19.128 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:19.128 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:19.128 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:19.128 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:19.128 CC lib/ftl/base/ftl_base_dev.o 00:03:19.128 CC lib/ftl/ftl_trace.o 00:03:19.128 CC lib/ftl/base/ftl_base_bdev.o 00:03:19.697 LIB libspdk_nbd.a 00:03:19.697 LIB libspdk_scsi.a 00:03:19.697 LIB libspdk_ublk.a 00:03:19.697 LIB libspdk_ftl.a 00:03:19.956 CC lib/vhost/vhost_scsi.o 00:03:19.956 CC lib/vhost/vhost.o 00:03:19.956 CC lib/vhost/vhost_blk.o 00:03:19.956 CC lib/vhost/vhost_rpc.o 00:03:19.956 CC lib/vhost/rte_vhost_user.o 00:03:19.956 CC lib/iscsi/conn.o 00:03:19.956 CC lib/iscsi/init_grp.o 00:03:19.956 CC lib/iscsi/iscsi.o 00:03:19.956 CC lib/iscsi/md5.o 00:03:19.956 CC lib/iscsi/param.o 00:03:19.956 CC lib/iscsi/portal_grp.o 00:03:19.956 CC lib/iscsi/tgt_node.o 00:03:19.956 CC lib/iscsi/iscsi_subsystem.o 00:03:19.956 CC lib/iscsi/iscsi_rpc.o 00:03:19.956 CC lib/iscsi/task.o 00:03:20.215 LIB libspdk_nvmf.a 00:03:20.474 LIB libspdk_vhost.a 00:03:20.733 LIB libspdk_iscsi.a 00:03:21.303 CC module/vfu_device/vfu_virtio.o 00:03:21.303 CC module/vfu_device/vfu_virtio_scsi.o 00:03:21.303 CC module/vfu_device/vfu_virtio_blk.o 00:03:21.303 CC module/vfu_device/vfu_virtio_rpc.o 00:03:21.303 CC module/env_dpdk/env_dpdk_rpc.o 00:03:21.303 CC module/accel/ioat/accel_ioat.o 00:03:21.303 CC module/accel/ioat/accel_ioat_rpc.o 00:03:21.303 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:21.303 CC module/blob/bdev/blob_bdev.o 00:03:21.303 CC module/accel/error/accel_error.o 00:03:21.303 CC module/accel/error/accel_error_rpc.o 00:03:21.303 CC module/scheduler/gscheduler/gscheduler.o 00:03:21.303 LIB libspdk_env_dpdk_rpc.a 00:03:21.303 CC module/accel/iaa/accel_iaa.o 00:03:21.303 CC module/accel/iaa/accel_iaa_rpc.o 00:03:21.303 CC module/accel/dsa/accel_dsa.o 00:03:21.303 CC module/accel/dsa/accel_dsa_rpc.o 00:03:21.303 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:21.303 CC module/keyring/file/keyring.o 00:03:21.303 CC module/keyring/file/keyring_rpc.o 00:03:21.303 CC module/sock/posix/posix.o 00:03:21.303 LIB libspdk_scheduler_dpdk_governor.a 00:03:21.303 LIB libspdk_scheduler_gscheduler.a 
00:03:21.303 LIB libspdk_accel_ioat.a 00:03:21.303 LIB libspdk_accel_error.a 00:03:21.303 LIB libspdk_keyring_file.a 00:03:21.303 LIB libspdk_scheduler_dynamic.a 00:03:21.303 LIB libspdk_accel_iaa.a 00:03:21.563 LIB libspdk_blob_bdev.a 00:03:21.563 LIB libspdk_accel_dsa.a 00:03:21.563 LIB libspdk_vfu_device.a 00:03:21.822 LIB libspdk_sock_posix.a 00:03:21.822 CC module/bdev/null/bdev_null.o 00:03:21.822 CC module/bdev/null/bdev_null_rpc.o 00:03:21.822 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:21.822 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:21.822 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:21.822 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:21.822 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:21.822 CC module/bdev/gpt/gpt.o 00:03:21.822 CC module/bdev/lvol/vbdev_lvol.o 00:03:21.822 CC module/bdev/gpt/vbdev_gpt.o 00:03:21.822 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:21.822 CC module/bdev/aio/bdev_aio.o 00:03:21.822 CC module/bdev/aio/bdev_aio_rpc.o 00:03:21.822 CC module/bdev/delay/vbdev_delay.o 00:03:21.822 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:21.822 CC module/bdev/malloc/bdev_malloc.o 00:03:21.822 CC module/bdev/error/vbdev_error.o 00:03:21.822 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:21.822 CC module/bdev/error/vbdev_error_rpc.o 00:03:21.822 CC module/bdev/split/vbdev_split.o 00:03:21.822 CC module/bdev/split/vbdev_split_rpc.o 00:03:21.822 CC module/bdev/passthru/vbdev_passthru.o 00:03:21.822 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:21.822 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:21.822 CC module/blobfs/bdev/blobfs_bdev.o 00:03:21.822 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:21.822 CC module/bdev/ftl/bdev_ftl.o 00:03:21.822 CC module/bdev/raid/bdev_raid_rpc.o 00:03:21.822 CC module/bdev/raid/bdev_raid.o 00:03:21.822 CC module/bdev/raid/raid0.o 00:03:21.822 CC module/bdev/raid/bdev_raid_sb.o 00:03:21.822 CC module/bdev/raid/raid1.o 00:03:21.822 CC module/bdev/raid/concat.o 00:03:21.822 CC module/bdev/iscsi/bdev_iscsi.o 00:03:21.822 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:21.822 CC module/bdev/nvme/bdev_nvme.o 00:03:21.822 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:21.822 CC module/bdev/nvme/nvme_rpc.o 00:03:21.822 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:21.822 CC module/bdev/nvme/bdev_mdns_client.o 00:03:21.822 CC module/bdev/nvme/vbdev_opal.o 00:03:21.822 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:22.081 LIB libspdk_blobfs_bdev.a 00:03:22.081 LIB libspdk_bdev_split.a 00:03:22.081 LIB libspdk_bdev_null.a 00:03:22.081 LIB libspdk_bdev_gpt.a 00:03:22.081 LIB libspdk_bdev_error.a 00:03:22.081 LIB libspdk_bdev_ftl.a 00:03:22.081 LIB libspdk_bdev_passthru.a 00:03:22.081 LIB libspdk_bdev_zone_block.a 00:03:22.081 LIB libspdk_bdev_aio.a 00:03:22.081 LIB libspdk_bdev_delay.a 00:03:22.081 LIB libspdk_bdev_malloc.a 00:03:22.081 LIB libspdk_bdev_iscsi.a 00:03:22.081 LIB libspdk_bdev_lvol.a 00:03:22.341 LIB libspdk_bdev_virtio.a 00:03:22.341 LIB libspdk_bdev_raid.a 00:03:23.278 LIB libspdk_bdev_nvme.a 00:03:23.846 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:23.846 CC module/event/subsystems/vmd/vmd.o 00:03:23.846 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:23.846 CC module/event/subsystems/iobuf/iobuf.o 00:03:23.846 CC module/event/subsystems/sock/sock.o 00:03:23.846 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:23.846 CC module/event/subsystems/keyring/keyring.o 00:03:23.846 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:23.846 CC module/event/subsystems/scheduler/scheduler.o 00:03:23.846 LIB 
libspdk_event_vhost_blk.a 00:03:23.846 LIB libspdk_event_sock.a 00:03:23.846 LIB libspdk_event_vmd.a 00:03:23.846 LIB libspdk_event_keyring.a 00:03:23.846 LIB libspdk_event_vfu_tgt.a 00:03:23.846 LIB libspdk_event_scheduler.a 00:03:23.846 LIB libspdk_event_iobuf.a 00:03:24.105 CC module/event/subsystems/accel/accel.o 00:03:24.365 LIB libspdk_event_accel.a 00:03:24.625 CC module/event/subsystems/bdev/bdev.o 00:03:24.625 LIB libspdk_event_bdev.a 00:03:24.885 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:24.885 CC module/event/subsystems/nbd/nbd.o 00:03:24.885 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:25.144 CC module/event/subsystems/ublk/ublk.o 00:03:25.144 CC module/event/subsystems/scsi/scsi.o 00:03:25.144 LIB libspdk_event_nbd.a 00:03:25.144 LIB libspdk_event_ublk.a 00:03:25.144 LIB libspdk_event_scsi.a 00:03:25.144 LIB libspdk_event_nvmf.a 00:03:25.403 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:25.403 CC module/event/subsystems/iscsi/iscsi.o 00:03:25.663 LIB libspdk_event_vhost_scsi.a 00:03:25.663 LIB libspdk_event_iscsi.a 00:03:25.929 TEST_HEADER include/spdk/accel.h 00:03:25.929 TEST_HEADER include/spdk/assert.h 00:03:25.929 TEST_HEADER include/spdk/accel_module.h 00:03:25.929 TEST_HEADER include/spdk/barrier.h 00:03:25.929 TEST_HEADER include/spdk/base64.h 00:03:25.929 TEST_HEADER include/spdk/bdev_module.h 00:03:25.929 TEST_HEADER include/spdk/bdev.h 00:03:25.929 TEST_HEADER include/spdk/bdev_zone.h 00:03:25.929 TEST_HEADER include/spdk/bit_array.h 00:03:25.929 TEST_HEADER include/spdk/blob_bdev.h 00:03:25.929 TEST_HEADER include/spdk/bit_pool.h 00:03:25.929 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:25.929 TEST_HEADER include/spdk/blobfs.h 00:03:25.929 CC app/spdk_nvme_identify/identify.o 00:03:25.929 TEST_HEADER include/spdk/blob.h 00:03:25.929 TEST_HEADER include/spdk/conf.h 00:03:25.929 TEST_HEADER include/spdk/config.h 00:03:25.929 CXX app/trace/trace.o 00:03:25.929 TEST_HEADER include/spdk/cpuset.h 00:03:25.929 TEST_HEADER include/spdk/crc16.h 00:03:25.929 TEST_HEADER include/spdk/crc64.h 00:03:25.929 TEST_HEADER include/spdk/dif.h 00:03:25.929 TEST_HEADER include/spdk/crc32.h 00:03:25.929 TEST_HEADER include/spdk/dma.h 00:03:25.929 TEST_HEADER include/spdk/endian.h 00:03:25.929 TEST_HEADER include/spdk/env_dpdk.h 00:03:25.929 TEST_HEADER include/spdk/event.h 00:03:25.929 TEST_HEADER include/spdk/env.h 00:03:25.929 CC app/spdk_nvme_perf/perf.o 00:03:25.929 CC app/trace_record/trace_record.o 00:03:25.929 TEST_HEADER include/spdk/fd.h 00:03:25.929 TEST_HEADER include/spdk/fd_group.h 00:03:25.929 CC test/rpc_client/rpc_client_test.o 00:03:25.929 TEST_HEADER include/spdk/file.h 00:03:25.929 CC app/spdk_lspci/spdk_lspci.o 00:03:25.929 TEST_HEADER include/spdk/gpt_spec.h 00:03:25.929 TEST_HEADER include/spdk/ftl.h 00:03:25.929 TEST_HEADER include/spdk/hexlify.h 00:03:25.929 TEST_HEADER include/spdk/histogram_data.h 00:03:25.929 TEST_HEADER include/spdk/idxd.h 00:03:25.929 TEST_HEADER include/spdk/idxd_spec.h 00:03:25.929 TEST_HEADER include/spdk/ioat.h 00:03:25.929 TEST_HEADER include/spdk/init.h 00:03:25.929 TEST_HEADER include/spdk/ioat_spec.h 00:03:25.929 TEST_HEADER include/spdk/iscsi_spec.h 00:03:25.929 TEST_HEADER include/spdk/json.h 00:03:25.929 TEST_HEADER include/spdk/jsonrpc.h 00:03:25.929 TEST_HEADER include/spdk/keyring.h 00:03:25.929 TEST_HEADER include/spdk/keyring_module.h 00:03:25.929 TEST_HEADER include/spdk/likely.h 00:03:25.929 TEST_HEADER include/spdk/log.h 00:03:25.929 TEST_HEADER include/spdk/lvol.h 00:03:25.929 TEST_HEADER 
include/spdk/memory.h 00:03:25.929 TEST_HEADER include/spdk/mmio.h 00:03:25.929 CC app/spdk_nvme_discover/discovery_aer.o 00:03:25.929 TEST_HEADER include/spdk/nbd.h 00:03:25.929 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:25.929 TEST_HEADER include/spdk/nvme.h 00:03:25.929 TEST_HEADER include/spdk/notify.h 00:03:25.929 CC app/spdk_top/spdk_top.o 00:03:25.929 TEST_HEADER include/spdk/nvme_intel.h 00:03:25.929 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:25.929 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:25.929 TEST_HEADER include/spdk/nvme_spec.h 00:03:25.929 TEST_HEADER include/spdk/nvme_zns.h 00:03:25.929 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:25.929 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:25.929 TEST_HEADER include/spdk/nvmf.h 00:03:25.929 TEST_HEADER include/spdk/nvmf_spec.h 00:03:25.929 TEST_HEADER include/spdk/nvmf_transport.h 00:03:25.929 TEST_HEADER include/spdk/opal.h 00:03:25.929 TEST_HEADER include/spdk/opal_spec.h 00:03:25.929 TEST_HEADER include/spdk/pci_ids.h 00:03:25.929 TEST_HEADER include/spdk/pipe.h 00:03:25.929 TEST_HEADER include/spdk/queue.h 00:03:25.929 TEST_HEADER include/spdk/reduce.h 00:03:25.929 TEST_HEADER include/spdk/rpc.h 00:03:25.929 TEST_HEADER include/spdk/scheduler.h 00:03:25.929 TEST_HEADER include/spdk/scsi.h 00:03:25.929 TEST_HEADER include/spdk/sock.h 00:03:25.929 TEST_HEADER include/spdk/scsi_spec.h 00:03:25.929 TEST_HEADER include/spdk/stdinc.h 00:03:25.929 TEST_HEADER include/spdk/string.h 00:03:25.929 TEST_HEADER include/spdk/thread.h 00:03:25.929 TEST_HEADER include/spdk/trace.h 00:03:25.929 TEST_HEADER include/spdk/trace_parser.h 00:03:25.929 TEST_HEADER include/spdk/ublk.h 00:03:25.929 TEST_HEADER include/spdk/tree.h 00:03:25.929 CC app/spdk_dd/spdk_dd.o 00:03:25.929 TEST_HEADER include/spdk/util.h 00:03:25.929 CC app/vhost/vhost.o 00:03:25.929 TEST_HEADER include/spdk/uuid.h 00:03:25.929 TEST_HEADER include/spdk/version.h 00:03:25.929 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:25.929 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:25.929 TEST_HEADER include/spdk/vhost.h 00:03:25.929 TEST_HEADER include/spdk/xor.h 00:03:25.929 TEST_HEADER include/spdk/vmd.h 00:03:25.929 TEST_HEADER include/spdk/zipf.h 00:03:25.929 CXX test/cpp_headers/accel_module.o 00:03:25.929 CXX test/cpp_headers/accel.o 00:03:25.929 CXX test/cpp_headers/barrier.o 00:03:25.929 CXX test/cpp_headers/assert.o 00:03:25.929 CC app/nvmf_tgt/nvmf_main.o 00:03:25.929 CXX test/cpp_headers/base64.o 00:03:25.929 CXX test/cpp_headers/bdev_module.o 00:03:25.929 CXX test/cpp_headers/bdev_zone.o 00:03:25.929 CXX test/cpp_headers/bdev.o 00:03:25.929 CXX test/cpp_headers/bit_array.o 00:03:25.929 CXX test/cpp_headers/bit_pool.o 00:03:25.929 CXX test/cpp_headers/blob_bdev.o 00:03:25.929 CXX test/cpp_headers/blobfs_bdev.o 00:03:25.929 CXX test/cpp_headers/blob.o 00:03:25.929 CXX test/cpp_headers/blobfs.o 00:03:25.929 CXX test/cpp_headers/conf.o 00:03:25.929 CXX test/cpp_headers/config.o 00:03:25.929 CXX test/cpp_headers/crc16.o 00:03:25.929 CXX test/cpp_headers/cpuset.o 00:03:25.929 CXX test/cpp_headers/crc32.o 00:03:25.929 CC app/iscsi_tgt/iscsi_tgt.o 00:03:25.929 CXX test/cpp_headers/crc64.o 00:03:25.929 CXX test/cpp_headers/dif.o 00:03:25.929 CXX test/cpp_headers/dma.o 00:03:25.929 CXX test/cpp_headers/endian.o 00:03:25.929 CXX test/cpp_headers/env_dpdk.o 00:03:25.929 CXX test/cpp_headers/env.o 00:03:25.929 CXX test/cpp_headers/event.o 00:03:25.929 CXX test/cpp_headers/fd_group.o 00:03:25.929 CC test/event/reactor_perf/reactor_perf.o 00:03:25.929 CXX 
test/cpp_headers/file.o 00:03:25.929 CXX test/cpp_headers/fd.o 00:03:25.929 CXX test/cpp_headers/ftl.o 00:03:25.929 CXX test/cpp_headers/gpt_spec.o 00:03:25.929 CXX test/cpp_headers/hexlify.o 00:03:25.929 CC test/event/event_perf/event_perf.o 00:03:25.929 CXX test/cpp_headers/histogram_data.o 00:03:25.929 CXX test/cpp_headers/idxd.o 00:03:25.929 CXX test/cpp_headers/idxd_spec.o 00:03:25.929 CXX test/cpp_headers/init.o 00:03:25.929 CC test/event/reactor/reactor.o 00:03:25.929 CC test/app/histogram_perf/histogram_perf.o 00:03:25.929 CC test/app/jsoncat/jsoncat.o 00:03:25.929 CC test/thread/lock/spdk_lock.o 00:03:25.929 CC test/env/vtophys/vtophys.o 00:03:25.929 CC test/app/stub/stub.o 00:03:25.929 CC test/event/scheduler/scheduler.o 00:03:25.929 CC test/event/app_repeat/app_repeat.o 00:03:25.929 CC examples/sock/hello_world/hello_sock.o 00:03:25.929 CC app/spdk_tgt/spdk_tgt.o 00:03:25.929 CC test/thread/poller_perf/poller_perf.o 00:03:26.201 CC test/nvme/reset/reset.o 00:03:26.201 CC test/bdev/bdevio/bdevio.o 00:03:26.201 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:26.201 CC test/nvme/err_injection/err_injection.o 00:03:26.201 CC test/nvme/aer/aer.o 00:03:26.201 CC test/nvme/sgl/sgl.o 00:03:26.201 CC test/nvme/boot_partition/boot_partition.o 00:03:26.201 CC examples/nvme/arbitration/arbitration.o 00:03:26.201 CC test/env/memory/memory_ut.o 00:03:26.201 CC test/nvme/startup/startup.o 00:03:26.201 CC test/nvme/e2edp/nvme_dp.o 00:03:26.201 CC test/nvme/fused_ordering/fused_ordering.o 00:03:26.201 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:26.201 CC test/env/pci/pci_ut.o 00:03:26.201 CC examples/nvme/hello_world/hello_world.o 00:03:26.201 CC test/nvme/overhead/overhead.o 00:03:26.201 CC examples/vmd/lsvmd/lsvmd.o 00:03:26.201 CC test/nvme/reserve/reserve.o 00:03:26.201 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:26.201 CC examples/vmd/led/led.o 00:03:26.201 CC test/nvme/connect_stress/connect_stress.o 00:03:26.201 CC examples/ioat/perf/perf.o 00:03:26.201 CC examples/accel/perf/accel_perf.o 00:03:26.201 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:26.201 CC test/nvme/compliance/nvme_compliance.o 00:03:26.201 CC test/nvme/cuse/cuse.o 00:03:26.201 CC examples/util/zipf/zipf.o 00:03:26.201 CC examples/ioat/verify/verify.o 00:03:26.201 CC test/nvme/fdp/fdp.o 00:03:26.201 CC examples/nvme/abort/abort.o 00:03:26.201 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:26.201 CC test/nvme/simple_copy/simple_copy.o 00:03:26.201 CC examples/nvme/reconnect/reconnect.o 00:03:26.201 CC examples/nvme/hotplug/hotplug.o 00:03:26.201 CC test/blobfs/mkfs/mkfs.o 00:03:26.201 CC examples/idxd/perf/perf.o 00:03:26.201 CC app/fio/nvme/fio_plugin.o 00:03:26.201 CC test/accel/dif/dif.o 00:03:26.201 CC test/dma/test_dma/test_dma.o 00:03:26.201 CC test/app/bdev_svc/bdev_svc.o 00:03:26.201 LINK spdk_lspci 00:03:26.201 CC examples/blob/cli/blobcli.o 00:03:26.201 CC examples/bdev/hello_world/hello_bdev.o 00:03:26.201 CC examples/blob/hello_world/hello_blob.o 00:03:26.201 CC examples/nvmf/nvmf/nvmf.o 00:03:26.201 CC app/fio/bdev/fio_plugin.o 00:03:26.201 CC examples/thread/thread/thread_ex.o 00:03:26.201 CC examples/bdev/bdevperf/bdevperf.o 00:03:26.201 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:26.201 CC test/lvol/esnap/esnap.o 00:03:26.201 CC test/env/mem_callbacks/mem_callbacks.o 00:03:26.201 LINK rpc_client_test 00:03:26.201 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:26.201 LINK spdk_nvme_discover 00:03:26.201 LINK interrupt_tgt 00:03:26.201 LINK reactor_perf 00:03:26.201 LINK event_perf 
00:03:26.201 LINK reactor 00:03:26.201 LINK spdk_trace_record 00:03:26.201 LINK vhost 00:03:26.201 CXX test/cpp_headers/ioat.o 00:03:26.201 CXX test/cpp_headers/ioat_spec.o 00:03:26.201 CXX test/cpp_headers/iscsi_spec.o 00:03:26.201 CXX test/cpp_headers/json.o 00:03:26.201 CXX test/cpp_headers/jsonrpc.o 00:03:26.201 LINK jsoncat 00:03:26.201 LINK histogram_perf 00:03:26.201 CXX test/cpp_headers/keyring.o 00:03:26.201 CXX test/cpp_headers/keyring_module.o 00:03:26.201 LINK vtophys 00:03:26.201 CXX test/cpp_headers/likely.o 00:03:26.201 CXX test/cpp_headers/log.o 00:03:26.201 CXX test/cpp_headers/lvol.o 00:03:26.201 CXX test/cpp_headers/memory.o 00:03:26.201 CXX test/cpp_headers/mmio.o 00:03:26.201 CXX test/cpp_headers/nbd.o 00:03:26.201 CXX test/cpp_headers/notify.o 00:03:26.201 CXX test/cpp_headers/nvme.o 00:03:26.201 CXX test/cpp_headers/nvme_intel.o 00:03:26.201 CXX test/cpp_headers/nvme_ocssd.o 00:03:26.201 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:26.201 CXX test/cpp_headers/nvme_spec.o 00:03:26.201 LINK lsvmd 00:03:26.201 CXX test/cpp_headers/nvme_zns.o 00:03:26.201 CXX test/cpp_headers/nvmf_cmd.o 00:03:26.201 LINK poller_perf 00:03:26.201 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:26.201 CXX test/cpp_headers/nvmf.o 00:03:26.201 CXX test/cpp_headers/nvmf_spec.o 00:03:26.201 CXX test/cpp_headers/nvmf_transport.o 00:03:26.201 LINK app_repeat 00:03:26.201 LINK nvmf_tgt 00:03:26.201 CXX test/cpp_headers/opal.o 00:03:26.201 CXX test/cpp_headers/opal_spec.o 00:03:26.201 LINK env_dpdk_post_init 00:03:26.201 CXX test/cpp_headers/pci_ids.o 00:03:26.201 CXX test/cpp_headers/pipe.o 00:03:26.201 LINK stub 00:03:26.201 LINK led 00:03:26.201 CXX test/cpp_headers/queue.o 00:03:26.201 CXX test/cpp_headers/reduce.o 00:03:26.201 LINK iscsi_tgt 00:03:26.201 LINK zipf 00:03:26.201 CXX test/cpp_headers/rpc.o 00:03:26.201 LINK boot_partition 00:03:26.201 CXX test/cpp_headers/scheduler.o 00:03:26.463 LINK startup 00:03:26.463 LINK connect_stress 00:03:26.463 LINK pmr_persistence 00:03:26.463 LINK err_injection 00:03:26.463 LINK doorbell_aers 00:03:26.463 fio_plugin.c:1559:29: warning: field 'ruhs' with variable sized type 'struct spdk_nvme_fdp_ruhs' not at the end of a struct or class is a GNU extension [-Wgnu-variable-sized-type-not-at-end] 00:03:26.463 struct spdk_nvme_fdp_ruhs ruhs; 00:03:26.463 ^ 00:03:26.463 LINK fused_ordering 00:03:26.463 LINK spdk_tgt 00:03:26.463 LINK reserve 00:03:26.463 CXX test/cpp_headers/scsi.o 00:03:26.463 LINK scheduler 00:03:26.463 LINK cmb_copy 00:03:26.463 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:26.463 LINK mkfs 00:03:26.463 LINK hello_sock 00:03:26.463 LINK ioat_perf 00:03:26.463 LINK verify 00:03:26.463 LINK hello_world 00:03:26.463 LINK bdev_svc 00:03:26.463 LINK simple_copy 00:03:26.463 LINK hotplug 00:03:26.464 CC test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.o 00:03:26.464 LINK reset 00:03:26.464 CC test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.o 00:03:26.464 LINK aer 00:03:26.464 LINK nvme_dp 00:03:26.464 LINK sgl 00:03:26.464 LINK overhead 00:03:26.464 CXX test/cpp_headers/scsi_spec.o 00:03:26.464 LINK fdp 00:03:26.464 LINK hello_bdev 00:03:26.464 CXX test/cpp_headers/sock.o 00:03:26.464 LINK hello_blob 00:03:26.464 LINK spdk_trace 00:03:26.464 CXX test/cpp_headers/stdinc.o 00:03:26.464 CXX test/cpp_headers/string.o 00:03:26.464 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:26.464 CXX test/cpp_headers/thread.o 00:03:26.464 CXX test/cpp_headers/trace.o 00:03:26.464 CXX test/cpp_headers/trace_parser.o 00:03:26.464 LINK thread 00:03:26.464 CXX 
test/cpp_headers/tree.o 00:03:26.464 CXX test/cpp_headers/ublk.o 00:03:26.464 CXX test/cpp_headers/util.o 00:03:26.464 CXX test/cpp_headers/uuid.o 00:03:26.464 CXX test/cpp_headers/version.o 00:03:26.464 CXX test/cpp_headers/vfio_user_pci.o 00:03:26.464 CXX test/cpp_headers/vfio_user_spec.o 00:03:26.464 CXX test/cpp_headers/vhost.o 00:03:26.464 CXX test/cpp_headers/vmd.o 00:03:26.464 CXX test/cpp_headers/xor.o 00:03:26.464 CXX test/cpp_headers/zipf.o 00:03:26.464 LINK arbitration 00:03:26.464 LINK reconnect 00:03:26.728 LINK nvmf 00:03:26.728 LINK idxd_perf 00:03:26.728 LINK dif 00:03:26.728 LINK bdevio 00:03:26.728 LINK spdk_dd 00:03:26.728 LINK abort 00:03:26.728 LINK test_dma 00:03:26.728 LINK pci_ut 00:03:26.728 LINK nvme_manage 00:03:26.728 LINK nvme_compliance 00:03:26.728 LINK accel_perf 00:03:26.728 LINK nvme_fuzz 00:03:26.728 LINK blobcli 00:03:26.728 LINK llvm_vfio_fuzz 00:03:26.987 LINK mem_callbacks 00:03:26.987 LINK spdk_bdev 00:03:26.987 LINK spdk_nvme_identify 00:03:26.987 1 warning generated. 00:03:26.987 LINK vhost_fuzz 00:03:26.987 LINK spdk_nvme 00:03:26.987 LINK memory_ut 00:03:26.987 LINK spdk_nvme_perf 00:03:27.246 LINK bdevperf 00:03:27.246 LINK cuse 00:03:27.246 LINK spdk_top 00:03:27.246 LINK llvm_nvme_fuzz 00:03:27.504 LINK spdk_lock 00:03:27.762 LINK iscsi_fuzz 00:03:30.299 LINK esnap 00:03:30.299 00:03:30.299 real 0m24.529s 00:03:30.299 user 4m45.194s 00:03:30.299 sys 1m54.405s 00:03:30.299 02:44:21 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:03:30.299 02:44:21 make -- common/autotest_common.sh@10 -- $ set +x 00:03:30.299 ************************************ 00:03:30.299 END TEST make 00:03:30.299 ************************************ 00:03:30.299 02:44:21 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:30.299 02:44:21 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:30.299 02:44:21 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:30.299 02:44:21 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:30.300 02:44:21 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:30.300 02:44:21 -- pm/common@44 -- $ pid=3355767 00:03:30.300 02:44:21 -- pm/common@50 -- $ kill -TERM 3355767 00:03:30.300 02:44:21 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:30.300 02:44:21 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:30.300 02:44:21 -- pm/common@44 -- $ pid=3355769 00:03:30.300 02:44:21 -- pm/common@50 -- $ kill -TERM 3355769 00:03:30.300 02:44:21 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:30.300 02:44:21 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:30.300 02:44:21 -- pm/common@44 -- $ pid=3355771 00:03:30.300 02:44:21 -- pm/common@50 -- $ kill -TERM 3355771 00:03:30.300 02:44:21 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:30.300 02:44:21 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:30.300 02:44:21 -- pm/common@44 -- $ pid=3355800 00:03:30.300 02:44:21 -- pm/common@50 -- $ sudo -E kill -TERM 3355800 00:03:30.559 02:44:21 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:03:30.559 02:44:21 -- nvmf/common.sh@7 -- # uname -s 00:03:30.559 02:44:21 -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:03:30.559 02:44:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:30.559 02:44:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:30.559 02:44:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:30.559 02:44:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:30.559 02:44:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:30.559 02:44:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:30.559 02:44:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:30.559 02:44:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:30.559 02:44:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:30.559 02:44:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:03:30.559 02:44:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:03:30.559 02:44:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:30.559 02:44:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:30.559 02:44:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:30.559 02:44:21 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:30.559 02:44:21 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:03:30.559 02:44:21 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:30.559 02:44:21 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:30.559 02:44:21 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:30.559 02:44:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:30.559 02:44:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:30.559 02:44:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:30.559 02:44:21 -- paths/export.sh@5 -- # export PATH 00:03:30.559 02:44:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:30.559 02:44:21 -- nvmf/common.sh@47 -- # : 0 00:03:30.559 02:44:21 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:30.559 02:44:21 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:30.559 02:44:21 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:30.559 02:44:21 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:30.559 02:44:21 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:30.559 02:44:21 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:30.559 02:44:21 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:30.559 02:44:21 -- nvmf/common.sh@51 -- # have_pci_nics=0 
00:03:30.560 02:44:21 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:30.560 02:44:21 -- spdk/autotest.sh@32 -- # uname -s 00:03:30.560 02:44:21 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:30.560 02:44:21 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:30.560 02:44:21 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:03:30.560 02:44:21 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:30.560 02:44:21 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:03:30.560 02:44:21 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:30.560 02:44:21 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:30.560 02:44:21 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:30.560 02:44:21 -- spdk/autotest.sh@48 -- # udevadm_pid=3431981 00:03:30.560 02:44:21 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:30.560 02:44:21 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:30.560 02:44:21 -- pm/common@17 -- # local monitor 00:03:30.560 02:44:21 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:30.560 02:44:21 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:30.560 02:44:21 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:30.560 02:44:21 -- pm/common@21 -- # date +%s 00:03:30.560 02:44:21 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:30.560 02:44:21 -- pm/common@21 -- # date +%s 00:03:30.560 02:44:21 -- pm/common@25 -- # sleep 1 00:03:30.560 02:44:21 -- pm/common@21 -- # date +%s 00:03:30.560 02:44:21 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715561061 00:03:30.560 02:44:21 -- pm/common@21 -- # date +%s 00:03:30.560 02:44:21 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715561061 00:03:30.560 02:44:21 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715561061 00:03:30.560 02:44:21 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715561061 00:03:30.560 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715561061_collect-vmstat.pm.log 00:03:30.560 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715561061_collect-cpu-load.pm.log 00:03:30.560 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715561061_collect-cpu-temp.pm.log 00:03:30.819 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715561061_collect-bmc-pm.bmc.pm.log 00:03:31.757 02:44:22 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:31.757 02:44:22 -- spdk/autotest.sh@57 -- # timing_enter autotest 
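Before the timing block begins, autotest starts the power/resource monitors: each collector gets the output directory, -l, and a per-run prefix built from date +%s, and then redirects itself into a *_collect-*.pm.log file as the "Redirecting to ..." lines show. A condensed sketch of launching one collector that way, using only the paths and options visible above:

#!/usr/bin/env bash
# Sketch: start one resource monitor the way this log does.
spdk_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
out_dir=$spdk_dir/../output/power
stamp=$(date +%s)                                  # e.g. 1715561061 in this run
"$spdk_dir"/scripts/perf/pm/collect-cpu-load \
    -d "$out_dir" -l -p "monitor.autotest.sh.$stamp" &
echo "collector pid: $!"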
00:03:31.757 02:44:22 -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:31.757 02:44:22 -- common/autotest_common.sh@10 -- # set +x 00:03:31.757 02:44:22 -- spdk/autotest.sh@59 -- # create_test_list 00:03:31.757 02:44:22 -- common/autotest_common.sh@744 -- # xtrace_disable 00:03:31.757 02:44:22 -- common/autotest_common.sh@10 -- # set +x 00:03:31.757 02:44:22 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh 00:03:31.757 02:44:22 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:03:31.757 02:44:22 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:03:31.757 02:44:22 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:03:31.757 02:44:22 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:03:31.757 02:44:22 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:31.757 02:44:22 -- common/autotest_common.sh@1451 -- # uname 00:03:31.757 02:44:22 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:03:31.757 02:44:22 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:31.757 02:44:22 -- common/autotest_common.sh@1471 -- # uname 00:03:31.757 02:44:22 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:03:31.757 02:44:22 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:31.757 02:44:22 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=clang 00:03:31.757 02:44:22 -- spdk/autotest.sh@72 -- # hash lcov 00:03:31.757 02:44:22 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=clang == *\c\l\a\n\g* ]] 00:03:31.757 02:44:22 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:31.757 02:44:22 -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:31.757 02:44:22 -- common/autotest_common.sh@10 -- # set +x 00:03:31.757 02:44:22 -- spdk/autotest.sh@91 -- # rm -f 00:03:31.757 02:44:22 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:35.048 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:35.048 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:35.048 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:35.048 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:35.048 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:35.048 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:35.048 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:35.048 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:35.048 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:35.048 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:35.048 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:35.048 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:35.048 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:35.048 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:35.048 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:35.048 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:35.048 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:03:35.048 02:44:25 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:35.048 02:44:25 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:03:35.048 02:44:25 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:03:35.048 02:44:25 -- common/autotest_common.sh@1666 
-- # local nvme bdf 00:03:35.048 02:44:25 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:03:35.048 02:44:25 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:03:35.048 02:44:25 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:03:35.048 02:44:25 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:35.048 02:44:25 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:03:35.048 02:44:25 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:35.048 02:44:25 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:35.048 02:44:25 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:35.048 02:44:25 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:35.048 02:44:25 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:35.048 02:44:25 -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:35.048 No valid GPT data, bailing 00:03:35.048 02:44:25 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:35.048 02:44:25 -- scripts/common.sh@391 -- # pt= 00:03:35.048 02:44:25 -- scripts/common.sh@392 -- # return 1 00:03:35.048 02:44:25 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:35.048 1+0 records in 00:03:35.048 1+0 records out 00:03:35.048 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00239579 s, 438 MB/s 00:03:35.048 02:44:25 -- spdk/autotest.sh@118 -- # sync 00:03:35.048 02:44:25 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:35.048 02:44:25 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:35.048 02:44:25 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:43.163 02:44:32 -- spdk/autotest.sh@124 -- # uname -s 00:03:43.163 02:44:32 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:43.163 02:44:32 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:03:43.163 02:44:32 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:43.163 02:44:32 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:43.163 02:44:32 -- common/autotest_common.sh@10 -- # set +x 00:03:43.163 ************************************ 00:03:43.163 START TEST setup.sh 00:03:43.163 ************************************ 00:03:43.163 02:44:32 setup.sh -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:03:43.163 * Looking for test storage... 00:03:43.163 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:43.163 02:44:33 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:43.163 02:44:33 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:43.163 02:44:33 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:03:43.163 02:44:33 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:43.163 02:44:33 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:43.163 02:44:33 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:43.163 ************************************ 00:03:43.163 START TEST acl 00:03:43.163 ************************************ 00:03:43.163 02:44:33 setup.sh.acl -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:03:43.163 * Looking for test storage... 
00:03:43.163 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:43.163 02:44:33 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:43.163 02:44:33 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:03:43.163 02:44:33 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:03:43.163 02:44:33 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf 00:03:43.163 02:44:33 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:03:43.163 02:44:33 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:03:43.163 02:44:33 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:03:43.163 02:44:33 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:43.163 02:44:33 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:03:43.163 02:44:33 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:43.163 02:44:33 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:43.163 02:44:33 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:43.163 02:44:33 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:43.163 02:44:33 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:43.163 02:44:33 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:43.163 02:44:33 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:46.452 02:44:36 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:46.452 02:44:36 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:46.452 02:44:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:46.452 02:44:36 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:46.452 02:44:36 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:46.452 02:44:36 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:03:49.806 Hugepages 00:03:49.806 node hugesize free / total 00:03:49.806 02:44:40 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:49.806 02:44:40 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:49.806 02:44:40 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:49.806 02:44:40 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:49.806 02:44:40 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:49.806 02:44:40 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:49.806 02:44:40 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:49.806 02:44:40 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:49.806 02:44:40 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:49.806 00:03:49.806 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:49.806 02:44:40 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:49.806 02:44:40 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:49.806 02:44:40 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:49.806 02:44:40 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:49.806 02:44:40 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:49.806 02:44:40 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:49.806 02:44:40 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:49.806 02:44:40 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:49.806 02:44:40 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:49.806 02:44:40 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:49.806 02:44:40 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:49.806 02:44:40 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:49.806 02:44:40 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:49.806 02:44:40 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:49.806 02:44:40 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:49.806 02:44:40 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:49.806 02:44:40 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:49.807 02:44:40 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:49.807 02:44:40 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:49.807 02:44:40 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:49.807 02:44:40 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:49.807 02:44:40 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:49.807 02:44:40 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:49.807 02:44:40 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:49.807 02:44:40 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:49.807 02:44:40 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:49.807 02:44:40 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:49.807 02:44:40 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:49.807 02:44:40 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:49.807 02:44:40 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:49.807 02:44:40 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:49.807 02:44:40 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:49.807 02:44:40 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:49.807 02:44:40 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:49.807 02:44:40 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:49.807 02:44:40 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:49.807 02:44:40 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:49.807 02:44:40 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:49.807 02:44:40 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:49.807 02:44:40 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:49.807 02:44:40 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:49.807 02:44:40 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:49.807 02:44:40 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:49.807 02:44:40 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:49.807 02:44:40 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:49.807 02:44:40 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:49.807 02:44:40 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:49.807 02:44:40 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:49.807 02:44:40 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:49.807 02:44:40 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:49.807 02:44:40 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 
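The acl suite builds its device list by parsing setup.sh status: each line is split into BDF and driver fields, the hugepage summary lines are skipped because they do not look like a PCI address, ioatdma channels are skipped with continue, and only nvme-bound controllers are recorded; the remaining 0000:80:04.x channels and the 0000:d8:00.0 NVMe controller are filtered the same way immediately below. A compact sketch of that loop, assuming setup.sh status keeps the column layout consumed by "read -r _ dev _ _ _ driver _" above (the real test additionally honours PCI_BLOCKED, which this sketch omits):

#!/usr/bin/env bash
# Sketch of the collect_setup_devs filtering seen in this stage.
spdk_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
devs=()
declare -A drivers
while read -r _ dev _ _ _ driver _; do
    [[ $dev == *:*:*.* ]] || continue      # drop hugepage/header lines
    [[ $driver == nvme ]] || continue      # drop ioatdma and other non-nvme devices
    devs+=("$dev")
    drivers["$dev"]=$driver
done < <("$spdk_dir"/scripts/setup.sh status)
echo "nvme controllers: ${devs[*]}"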
00:03:49.807 02:44:40 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:49.807 02:44:40 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:49.807 02:44:40 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:49.807 02:44:40 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:49.807 02:44:40 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:49.807 02:44:40 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:49.807 02:44:40 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:49.807 02:44:40 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:49.807 02:44:40 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:49.807 02:44:40 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:49.807 02:44:40 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:49.807 02:44:40 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:49.807 02:44:40 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:49.807 02:44:40 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:49.807 02:44:40 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:49.807 02:44:40 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:49.807 02:44:40 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]] 00:03:49.807 02:44:40 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:49.807 02:44:40 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:03:49.807 02:44:40 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:49.807 02:44:40 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:49.807 02:44:40 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:49.807 02:44:40 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:49.807 02:44:40 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:49.807 02:44:40 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:49.807 02:44:40 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:49.807 02:44:40 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:49.807 ************************************ 00:03:49.807 START TEST denied 00:03:49.807 ************************************ 00:03:49.807 02:44:40 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied 00:03:49.807 02:44:40 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:d8:00.0' 00:03:49.807 02:44:40 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:49.807 02:44:40 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d8:00.0' 00:03:49.807 02:44:40 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:49.807 02:44:40 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:53.093 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0 00:03:53.093 02:44:43 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:d8:00.0 00:03:53.093 02:44:43 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:53.093 02:44:43 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:53.093 02:44:43 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]] 00:03:53.093 02:44:43 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver 00:03:53.093 
02:44:43 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:53.093 02:44:43 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:53.093 02:44:43 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:53.093 02:44:43 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:53.094 02:44:43 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:58.362 00:03:58.362 real 0m8.183s 00:03:58.362 user 0m2.603s 00:03:58.362 sys 0m4.939s 00:03:58.362 02:44:48 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:58.362 02:44:48 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:58.362 ************************************ 00:03:58.362 END TEST denied 00:03:58.362 ************************************ 00:03:58.362 02:44:48 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:58.362 02:44:48 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:58.362 02:44:48 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:58.362 02:44:48 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:58.362 ************************************ 00:03:58.362 START TEST allowed 00:03:58.362 ************************************ 00:03:58.362 02:44:48 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed 00:03:58.362 02:44:48 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0 00:03:58.362 02:44:48 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:58.362 02:44:48 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*' 00:03:58.362 02:44:48 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.362 02:44:48 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:03.628 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:03.628 02:44:53 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:03.628 02:44:53 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:03.628 02:44:53 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:03.628 02:44:53 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:03.628 02:44:53 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:06.914 00:04:06.914 real 0m8.437s 00:04:06.914 user 0m2.281s 00:04:06.914 sys 0m4.675s 00:04:06.914 02:44:57 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:06.914 02:44:57 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:06.914 ************************************ 00:04:06.914 END TEST allowed 00:04:06.914 ************************************ 00:04:06.914 00:04:06.914 real 0m24.034s 00:04:06.914 user 0m7.437s 00:04:06.914 sys 0m14.714s 00:04:06.914 02:44:57 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:06.914 02:44:57 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:06.914 ************************************ 00:04:06.914 END TEST acl 00:04:06.914 ************************************ 00:04:06.914 02:44:57 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:04:06.914 02:44:57 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 
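The acl suite traced above drives scripts/setup.sh twice against the same NVMe controller, once with it blocked and once with it allowed, and greps the script's output for the expected decision. A minimal sketch of what those two checks amount to, assuming the same setup.sh helper and the 0000:d8:00.0 controller seen in this run (SPDK_DIR and CTRL below are illustrative names, not the test's own variables):

    #!/usr/bin/env bash
    # Sketch only: reproduces the denied/allowed ACL checks from the trace above.
    SPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk   # path taken from this log
    CTRL=0000:d8:00.0                                              # NVMe controller under test

    # "denied": with the controller blocked, setup.sh config must skip it
    PCI_BLOCKED=" $CTRL" "$SPDK_DIR/scripts/setup.sh" config \
        | grep "Skipping denied controller at $CTRL"

    # "allowed": with the controller allowed, it must be rebound from nvme to vfio-pci
    PCI_ALLOWED="$CTRL" "$SPDK_DIR/scripts/setup.sh" config \
        | grep -E "$CTRL .*: nvme -> .*"

    # each test body ends with a reset so the following suite starts from a clean state
    "$SPDK_DIR/scripts/setup.sh" reset

Both greps mirror the patterns visible in the trace; if either one finds no match the test fails, which is why the log records only the timing summary when the checks pass.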
00:04:06.914 02:44:57 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:06.914 02:44:57 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:06.914 ************************************ 00:04:06.914 START TEST hugepages 00:04:06.914 ************************************ 00:04:06.914 02:44:57 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:04:06.914 * Looking for test storage... 00:04:06.914 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:04:06.914 02:44:57 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:06.914 02:44:57 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:06.914 02:44:57 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:06.914 02:44:57 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:06.914 02:44:57 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:06.914 02:44:57 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:06.914 02:44:57 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:06.914 02:44:57 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:06.914 02:44:57 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:06.914 02:44:57 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:06.914 02:44:57 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.914 02:44:57 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.914 02:44:57 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.914 02:44:57 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.914 02:44:57 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.914 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.914 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.914 02:44:57 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 39618932 kB' 'MemAvailable: 42188676 kB' 'Buffers: 12388 kB' 'Cached: 12214164 kB' 'SwapCached: 28240 kB' 'Active: 10061480 kB' 'Inactive: 2757620 kB' 'Active(anon): 9462296 kB' 'Inactive(anon): 438972 kB' 'Active(file): 599184 kB' 'Inactive(file): 2318648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8286204 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 567584 kB' 'Mapped: 181364 kB' 'Shmem: 9308720 kB' 'KReclaimable: 298100 kB' 'Slab: 890384 kB' 'SReclaimable: 298100 kB' 'SUnreclaim: 592284 kB' 'KernelStack: 21920 kB' 'PageTables: 8400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36439060 kB' 'Committed_AS: 11253756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215908 kB' 'VmallocChunk: 0 kB' 'Percpu: 80640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2671988 kB' 'DirectMap2M: 48394240 kB' 'DirectMap1G: 17825792 kB' 00:04:06.914 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # 
[[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.914 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.914 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.914 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.914 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.914 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.914 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.914 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.914 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.914 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.914 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.914 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.914 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.914 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.914 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.914 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.914 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.914 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.914 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.914 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.914 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.914 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.914 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.914 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.914 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.915 02:44:57 
setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.915 02:44:57 
setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.915 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.916 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.916 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.916 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.916 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.916 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.916 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.916 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.916 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.916 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.916 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.916 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.916 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.916 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.916 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.916 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.916 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.916 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.916 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.916 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.916 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.916 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.916 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.916 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.916 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.916 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.916 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.916 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.916 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.916 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.916 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.916 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.916 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.916 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.916 02:44:57 
setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.916 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.916 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.916 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.916 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.916 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.916 02:44:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.916 02:44:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.916 02:44:57 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:06.916 02:44:57 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:06.916 02:44:57 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:06.916 02:44:57 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:06.916 02:44:57 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:06.916 02:44:57 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:06.916 02:44:57 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:06.916 02:44:57 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:06.916 02:44:57 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:06.916 02:44:57 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:06.916 02:44:57 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:06.916 02:44:57 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:06.916 02:44:57 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:06.916 02:44:57 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:06.916 02:44:57 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:06.916 02:44:57 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:06.916 02:44:57 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:06.916 02:44:57 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:06.916 02:44:57 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:06.916 02:44:57 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:06.916 02:44:57 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:06.916 02:44:57 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:06.916 02:44:57 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:06.916 02:44:57 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:06.916 02:44:57 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:06.916 02:44:57 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:06.916 02:44:57 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:06.916 02:44:57 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:06.916 02:44:57 setup.sh.hugepages -- 
setup/hugepages.sh@41 -- # echo 0 00:04:06.916 02:44:57 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:06.916 02:44:57 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:06.916 02:44:57 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:06.916 02:44:57 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:06.916 02:44:57 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:06.916 02:44:57 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:06.916 ************************************ 00:04:06.916 START TEST default_setup 00:04:06.916 ************************************ 00:04:06.916 02:44:57 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup 00:04:06.916 02:44:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:06.916 02:44:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:06.916 02:44:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:06.916 02:44:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:06.916 02:44:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:06.916 02:44:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:06.916 02:44:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:06.916 02:44:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:06.916 02:44:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:06.916 02:44:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:06.916 02:44:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:06.916 02:44:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:06.916 02:44:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:06.916 02:44:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:06.916 02:44:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:06.916 02:44:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:06.916 02:44:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:06.916 02:44:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:06.916 02:44:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:06.916 02:44:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:06.916 02:44:57 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:06.916 02:44:57 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:10.203 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:10.203 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:10.203 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:10.203 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:10.203 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:10.203 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 
00:04:10.203 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:10.203 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:10.203 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:10.204 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:10.204 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:10.204 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:10.204 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:10.204 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:10.204 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:10.204 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:11.578 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:11.841 02:45:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:11.841 02:45:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:11.841 02:45:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:11.841 02:45:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:11.841 02:45:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:11.841 02:45:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:11.841 02:45:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:11.841 02:45:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:11.841 02:45:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:11.841 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:11.841 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:11.841 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:11.841 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:11.841 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.841 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.841 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.841 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.841 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.841 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.841 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.841 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41738076 kB' 'MemAvailable: 44307856 kB' 'Buffers: 12388 kB' 'Cached: 12214288 kB' 'SwapCached: 28240 kB' 'Active: 10086796 kB' 'Inactive: 2757620 kB' 'Active(anon): 9487612 kB' 'Inactive(anon): 438972 kB' 'Active(file): 599184 kB' 'Inactive(file): 2318648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8286204 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 592728 kB' 'Mapped: 182732 kB' 'Shmem: 9308844 kB' 'KReclaimable: 298172 kB' 'Slab: 888388 kB' 'SReclaimable: 298172 kB' 'SUnreclaim: 590216 kB' 'KernelStack: 22144 kB' 'PageTables: 9276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 
'Committed_AS: 11281340 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215944 kB' 'VmallocChunk: 0 kB' 'Percpu: 80640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2671988 kB' 'DirectMap2M: 48394240 kB' 'DirectMap1G: 17825792 kB' 00:04:11.841 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.841 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.841 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.841 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.841 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.841 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.841 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.841 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.841 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.841 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.841 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.841 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.841 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.841 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.841 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.841 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.841 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.841 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.841 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.841 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.841 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.841 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.841 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.841 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.841 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.841 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.841 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.841 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.841 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.841 02:45:02 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.841 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.841 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.842 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@19 
-- # local var val 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41739924 kB' 'MemAvailable: 44309704 kB' 'Buffers: 12388 kB' 'Cached: 12214292 kB' 'SwapCached: 28240 kB' 'Active: 10086900 kB' 'Inactive: 2757620 kB' 'Active(anon): 9487716 kB' 'Inactive(anon): 438972 kB' 'Active(file): 599184 kB' 'Inactive(file): 2318648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8286204 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 592688 kB' 'Mapped: 182700 kB' 'Shmem: 9308848 kB' 'KReclaimable: 298172 kB' 'Slab: 888316 kB' 'SReclaimable: 298172 kB' 'SUnreclaim: 590144 kB' 'KernelStack: 22304 kB' 'PageTables: 8832 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11279964 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215976 kB' 'VmallocChunk: 0 kB' 'Percpu: 80640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2671988 kB' 'DirectMap2M: 48394240 kB' 'DirectMap1G: 17825792 kB' 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.843 02:45:02 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.843 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.844 02:45:02 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.844 02:45:02 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:04:11.844 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41739800 kB' 'MemAvailable: 44309580 kB' 'Buffers: 12388 kB' 'Cached: 12214304 kB' 'SwapCached: 28240 kB' 'Active: 10087044 kB' 'Inactive: 2757620 kB' 'Active(anon): 9487860 kB' 'Inactive(anon): 438972 kB' 'Active(file): 599184 kB' 'Inactive(file): 2318648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8286204 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 593256 kB' 'Mapped: 182648 kB' 'Shmem: 9308860 kB' 'KReclaimable: 298172 kB' 'Slab: 888272 kB' 'SReclaimable: 298172 kB' 'SUnreclaim: 590100 kB' 'KernelStack: 22432 kB' 'PageTables: 9504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11290396 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216120 kB' 'VmallocChunk: 0 kB' 'Percpu: 80640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2671988 kB' 'DirectMap2M: 48394240 kB' 'DirectMap1G: 17825792 kB' 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.845 
02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.845 02:45:02 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.845 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.846 02:45:02 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.846 02:45:02 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # read -r var val _ 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:11.846 nr_hugepages=1024 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:11.846 resv_hugepages=0 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:11.846 surplus_hugepages=0 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:11.846 anon_hugepages=0 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:11.846 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41741184 kB' 'MemAvailable: 44310964 kB' 'Buffers: 12388 kB' 'Cached: 12214320 kB' 'SwapCached: 28240 kB' 'Active: 10086476 kB' 'Inactive: 2757620 kB' 'Active(anon): 9487292 kB' 'Inactive(anon): 438972 kB' 'Active(file): 599184 kB' 'Inactive(file): 2318648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 
8388604 kB' 'SwapFree: 8286204 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 592612 kB' 'Mapped: 182648 kB' 'Shmem: 9308876 kB' 'KReclaimable: 298172 kB' 'Slab: 888272 kB' 'SReclaimable: 298172 kB' 'SUnreclaim: 590100 kB' 'KernelStack: 22384 kB' 'PageTables: 9240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11281772 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216072 kB' 'VmallocChunk: 0 kB' 'Percpu: 80640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2671988 kB' 'DirectMap2M: 48394240 kB' 'DirectMap1G: 17825792 kB' 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.847 02:45:02 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.847 
02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.847 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
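(Editor's note) The long runs of "[[ <key> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]" / "continue" entries in this trace are xtrace output from a key-matching loop over /proc/meminfo driven by setup/common.sh's get_meminfo. A rough, self-contained sketch of that lookup pattern is below; get_meminfo_value is an illustrative name (not the script's actual function) and the per-node prefix handling is simplified compared with the real helper.

    # Illustrative approximation of the meminfo lookup seen in this trace.
    get_meminfo_value() {
        local get=$1 node=${2:-}          # key to look up, optional NUMA node
        local mem_f=/proc/meminfo
        [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
        local var val _
        # Per-node meminfo lines carry a "Node <N> " prefix; strip it so the
        # key comparison works the same way as for the global file.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # not the key we want -> next line
            echo "${val:-0}"
            return 0
        done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
        echo 0
    }

    # Example use, mirroring the values retrieved in this trace:
    #   surp=$(get_meminfo_value HugePages_Surp)    # 0
    #   total=$(get_meminfo_value HugePages_Total)  # 1024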
00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.848 02:45:02 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:11.848 
02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 20252200 kB' 'MemUsed: 12386940 kB' 'SwapCached: 25600 kB' 'Active: 6952360 kB' 'Inactive: 1542512 kB' 'Active(anon): 6760640 kB' 'Inactive(anon): 432008 kB' 'Active(file): 191720 kB' 'Inactive(file): 1110504 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8135556 kB' 'Mapped: 116080 kB' 'AnonPages: 363068 kB' 'Shmem: 6807732 kB' 'KernelStack: 12584 kB' 'PageTables: 4436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 181576 kB' 'Slab: 508676 kB' 'SReclaimable: 181576 kB' 'SUnreclaim: 327100 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:11.848 02:45:02 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.848 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.849 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.850 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.850 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.850 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.850 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.850 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.850 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.850 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.850 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.850 02:45:02 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.850 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.850 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.850 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.850 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.850 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.850 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.850 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.850 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:11.850 02:45:02 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:11.850 02:45:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:11.850 02:45:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:11.850 02:45:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:11.850 02:45:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:11.850 02:45:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:11.850 node0=1024 expecting 1024 00:04:11.850 02:45:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:11.850 00:04:11.850 real 0m5.154s 00:04:11.850 user 0m1.277s 00:04:11.850 sys 0m2.325s 00:04:11.850 02:45:02 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:11.850 02:45:02 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:11.850 ************************************ 00:04:11.850 END TEST default_setup 00:04:11.850 ************************************ 00:04:11.850 02:45:02 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:11.850 02:45:02 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:11.850 02:45:02 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:11.850 02:45:02 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:11.850 ************************************ 00:04:11.850 START TEST per_node_1G_alloc 00:04:11.850 ************************************ 00:04:11.850 02:45:02 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc 00:04:11.850 02:45:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:11.850 02:45:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:11.850 02:45:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:11.850 02:45:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:11.850 02:45:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:11.850 02:45:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:11.850 02:45:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 
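With the lookups done, the default_setup verification above reduces to two comparisons on this runner's figures: the whole pool must be accounted for by requested, surplus and reserved pages (hugepages.sh@110), and the count read back for node0 must match what was expected. A hedged restatement using the numbers from the trace; the script's own bookkeeping goes through the nodes_test/nodes_sys arrays shown above:

    nr_hugepages=1024; surp=0; resv=0           # values echoed by the get_meminfo calls above
    (( 1024 == nr_hugepages + surp + resv ))    # hugepages.sh@110: pool fully accounted for
    nodes_sys=([0]=1024 [1]=0)                  # per-node counts from /sys/devices/system/node
    echo "node0=${nodes_sys[0]} expecting 1024" # matches the "node0=1024 expecting 1024" line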
00:04:11.850 02:45:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:11.850 02:45:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:11.850 02:45:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:11.850 02:45:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:11.850 02:45:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:11.850 02:45:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:11.850 02:45:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:11.850 02:45:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:11.850 02:45:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:11.850 02:45:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:11.850 02:45:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:11.850 02:45:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:11.850 02:45:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:11.850 02:45:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:11.850 02:45:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:11.850 02:45:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:11.850 02:45:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:11.850 02:45:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:11.850 02:45:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:11.850 02:45:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:15.138 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:15.138 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:15.138 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:15.138 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:15.138 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:15.138 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:15.138 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:15.138 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:15.138 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:15.138 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:15.138 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:15.138 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:15.138 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:15.138 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:15.138 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:15.138 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:15.138 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41795076 kB' 'MemAvailable: 44364820 kB' 'Buffers: 12388 kB' 'Cached: 12214452 kB' 'SwapCached: 28240 kB' 'Active: 10083208 kB' 'Inactive: 2757620 kB' 'Active(anon): 9484024 kB' 'Inactive(anon): 438972 kB' 'Active(file): 599184 kB' 'Inactive(file): 2318648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8286204 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 589148 kB' 'Mapped: 181772 kB' 'Shmem: 9309008 kB' 'KReclaimable: 298100 kB' 'Slab: 888392 kB' 'SReclaimable: 298100 kB' 'SUnreclaim: 590292 kB' 'KernelStack: 22576 kB' 'PageTables: 9648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11275780 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216244 kB' 'VmallocChunk: 0 kB' 'Percpu: 80640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2671988 kB' 'DirectMap2M: 48394240 kB' 'DirectMap1G: 
17825792 kB' 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
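Before the meminfo dump above, the per_node_1G_alloc prologue (get_test_nr_hugepages 1048576 0 1 through NRHUGE=512, HUGENODE=0,1) converted the 1 GiB request into a per-node page count. The arithmetic implied by the trace, assuming the 2048 kB Hugepagesize reported in the dump; this is an inference from the logged values, not a verified copy of hugepages.sh:

    size_kb=1048576                      # requested size: 1 GiB expressed in kB
    default_hugepages=2048               # Hugepagesize on this runner, in kB
    nr_hugepages=$(( size_kb / default_hugepages ))   # 1048576 / 2048 = 512 pages
    nodes_test=()
    for node in 0 1; do                  # node_ids=('0' '1') in the trace
        nodes_test[node]=$nr_hugepages   # 512 pages requested on each NUMA node
    done
    echo "NRHUGE=$nr_hugepages HUGENODE=0,1"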
00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.138 02:45:05 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.138 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 
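The verify_nr_hugepages pass above first checks that transparent hugepages are not pinned to [never] (hugepages.sh@96) and, because they are not, samples AnonHugePages, which the scan just completed returned as 0 (anon=0); the trace then moves on to the surplus-page lookup for the same verification. A hedged sketch of that guard, reusing the get_meminfo sketch from earlier in this log; how anon feeds into the later comparisons is outside this excerpt:

    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # "always [madvise] never" on this runner
    anon=0
    if [[ $thp != *"[never]"* ]]; then        # only matters when THP can actually be handed out
        anon=$(get_meminfo AnonHugePages)     # 0 kB in the dump above
    fi
    echo "anon=$anon"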
00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41795028 kB' 'MemAvailable: 44364772 kB' 'Buffers: 12388 kB' 'Cached: 12214452 kB' 'SwapCached: 28240 kB' 'Active: 10082364 kB' 'Inactive: 2757620 kB' 'Active(anon): 9483180 kB' 'Inactive(anon): 438972 kB' 'Active(file): 599184 kB' 'Inactive(file): 2318648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8286204 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 588296 kB' 'Mapped: 181740 kB' 'Shmem: 9309008 kB' 'KReclaimable: 298100 kB' 'Slab: 888264 kB' 'SReclaimable: 298100 kB' 'SUnreclaim: 590164 kB' 'KernelStack: 22384 kB' 'PageTables: 9376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11272484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216084 kB' 'VmallocChunk: 0 kB' 'Percpu: 80640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2671988 kB' 'DirectMap2M: 48394240 kB' 'DirectMap1G: 17825792 kB' 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.139 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.140 02:45:05 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.140 02:45:05 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.140 02:45:05 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.140 02:45:05 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.140 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.141 02:45:05 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.141 02:45:05 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:15.141 02:45:05 
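The trace above is the first full pass of get_meminfo in setup/common.sh: it snapshots /proc/meminfo with mapfile, strips any "Node N " prefix, then walks the "Field: value" pairs with IFS=': ' until it hits the requested field (here HugePages_Surp), echoes the value, and returns; setup/hugepages.sh@99 stores that as surp=0. A minimal sketch of that logic, reconstructed only from the trace (function name, argument handling, and loop shape are assumptions; the upstream implementation may differ):

# Sketch of the get_meminfo behaviour visible in the xtrace; not the
# verbatim SPDK helper.
get_meminfo_sketch() {
    local get=$1 node=${2:-}      # field to look up, optional NUMA node
    local line var val
    local mem_f=/proc/meminfo
    # With a node argument, prefer the per-node meminfo if it exists
    # (the trace shows the check against /sys/devices/system/node/node$node/meminfo).
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node meminfo prefixes every line with "Node N "; strip that prefix
    # so the field names match the system-wide format.
    shopt -s extglob
    mem=("${mem[@]#Node +([0-9]) }")
    # Walk "Field: value ..." lines until the requested field is found.
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}

# Example matching the trace: prints 0 on this machine.
# get_meminfo_sketch HugePages_Surp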
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41798304 kB' 'MemAvailable: 44368048 kB' 'Buffers: 12388 kB' 'Cached: 12214452 kB' 'SwapCached: 28240 kB' 'Active: 10081324 kB' 'Inactive: 2757620 kB' 'Active(anon): 9482140 kB' 'Inactive(anon): 438972 kB' 'Active(file): 599184 kB' 'Inactive(file): 2318648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8286204 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 587208 kB' 'Mapped: 181740 kB' 'Shmem: 9309008 kB' 'KReclaimable: 298100 kB' 'Slab: 888276 kB' 'SReclaimable: 298100 kB' 'SUnreclaim: 590176 kB' 'KernelStack: 22208 kB' 'PageTables: 8948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11271860 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216036 kB' 'VmallocChunk: 0 kB' 'Percpu: 80640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2671988 kB' 'DirectMap2M: 48394240 kB' 'DirectMap1G: 17825792 kB' 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.141 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.142 02:45:05 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.142 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:15.143 nr_hugepages=1024 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:15.143 resv_hugepages=0 00:04:15.143 02:45:05 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:15.143 surplus_hugepages=0 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:15.143 anon_hugepages=0 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.143 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.144 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41797648 kB' 'MemAvailable: 44367392 kB' 'Buffers: 12388 kB' 'Cached: 12214480 kB' 'SwapCached: 28240 kB' 'Active: 10081180 kB' 'Inactive: 2757620 kB' 'Active(anon): 9481996 kB' 'Inactive(anon): 438972 kB' 'Active(file): 599184 kB' 'Inactive(file): 2318648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8286204 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 587016 kB' 'Mapped: 181740 kB' 'Shmem: 9309036 kB' 'KReclaimable: 298100 kB' 'Slab: 888504 kB' 'SReclaimable: 298100 kB' 'SUnreclaim: 590404 kB' 'KernelStack: 21952 kB' 'PageTables: 8308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11272540 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216020 kB' 'VmallocChunk: 0 kB' 'Percpu: 80640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2671988 kB' 'DirectMap2M: 48394240 kB' 'DirectMap1G: 17825792 kB' 00:04:15.144 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.144 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.144 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.144 02:45:05 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.144 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.144 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.144 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.144 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.144 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.144 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.144 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.144 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.144 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.144 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.144 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.144 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.144 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.144 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.144 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.144 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.144 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.144 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.144 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.144 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.144 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.144 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.144 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.144 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.144 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.144 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.144 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.144 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.144 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.144 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.144 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.144 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.144 02:45:05 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.144 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.144 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.144 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.144 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.144 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.144 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.144 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.144 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.144 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.144 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.144 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.144 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.144 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.144 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.144 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.144 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.144 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.144 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.144 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.144 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.144 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.144 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.144 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.144 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.144 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.144 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.144 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.144 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.144 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.144 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.144 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.144 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.144 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.144 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.144 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.144 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.405 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.405 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.405 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.405 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.405 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.405 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.405 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.405 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.405 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.405 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.405 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.405 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.405 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.405 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.405 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.405 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.405 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.405 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.405 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.405 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.405 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.405 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.405 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.405 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.405 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.405 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.405 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.405 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.405 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:04:15.405 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.405 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.405 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.405 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.405 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.405 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.405 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.405 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.405 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.405 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.405 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.405 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.405 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.405 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.405 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.405 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.405 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:15.406 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:15.407 02:45:05 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 21302836 kB' 'MemUsed: 11336304 kB' 'SwapCached: 25600 kB' 'Active: 6947968 kB' 'Inactive: 1542512 kB' 'Active(anon): 6756248 kB' 'Inactive(anon): 432008 kB' 'Active(file): 191720 kB' 'Inactive(file): 1110504 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8135684 kB' 'Mapped: 116080 kB' 'AnonPages: 358000 kB' 'Shmem: 6807860 kB' 'KernelStack: 12584 kB' 'PageTables: 4504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 181512 kB' 'Slab: 508968 kB' 'SReclaimable: 181512 kB' 'SUnreclaim: 327456 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.407 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.408 02:45:05 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.408 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656080 kB' 'MemFree: 20495984 kB' 'MemUsed: 7160096 kB' 'SwapCached: 2640 kB' 'Active: 3132200 kB' 'Inactive: 1215108 kB' 'Active(anon): 2724736 kB' 'Inactive(anon): 6964 kB' 'Active(file): 407464 kB' 'Inactive(file): 1208144 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4119440 kB' 'Mapped: 65648 kB' 'AnonPages: 227984 kB' 'Shmem: 2501192 kB' 'KernelStack: 9352 kB' 'PageTables: 3740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 116588 kB' 'Slab: 379520 kB' 'SReclaimable: 116588 kB' 'SUnreclaim: 262932 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
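The long runs of "-- # continue" in the trace above come from a field-by-field scan: the helper reads the per-node meminfo file into an array, strips the "Node <n> " prefix, and walks the lines until the requested key (here HugePages_Surp) matches, at which point it echoes the value and returns. A minimal, hedged sketch of that pattern in Bash follows; the function name get_node_meminfo and its two-argument interface are illustrative assumptions, not the exact setup/common.sh helper, which also covers the global /proc/meminfo case.
#!/usr/bin/env bash
# Illustrative sketch of the meminfo lookup exercised by the trace above.
# Assumption: two arguments (key, node); the real setup/common.sh helper
# also handles the no-node case by reading /proc/meminfo directly.
shopt -s extglob   # needed for the +([0-9]) pattern used to strip the prefix
get_node_meminfo() {
    local get=$1 node=$2
    local mem_f=/sys/devices/system/node/node${node}/meminfo
    [[ -e $mem_f ]] || mem_f=/proc/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node meminfo lines start with "Node <n> "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    local var val _
    while IFS=': ' read -r var val _; do
        # Every non-matching key shows up as one "continue" entry in the trace.
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}
# Example: get_node_meminfo HugePages_Surp 1   # prints 0 for the node dumped above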
00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.409 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.410 02:45:05 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.410 02:45:05 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:15.410 node0=512 expecting 512 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:15.410 node1=512 expecting 512 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:15.410 00:04:15.410 real 0m3.387s 00:04:15.410 user 0m1.180s 00:04:15.410 sys 0m2.174s 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:15.410 02:45:05 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:15.410 ************************************ 00:04:15.410 END TEST per_node_1G_alloc 00:04:15.410 ************************************ 00:04:15.410 02:45:06 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:15.410 02:45:06 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:15.410 02:45:06 
setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:15.410 02:45:06 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:15.410 ************************************ 00:04:15.410 START TEST even_2G_alloc 00:04:15.410 ************************************ 00:04:15.410 02:45:06 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc 00:04:15.410 02:45:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:15.410 02:45:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:15.410 02:45:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:15.410 02:45:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:15.410 02:45:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:15.410 02:45:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:15.410 02:45:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:15.410 02:45:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:15.410 02:45:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:15.410 02:45:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:15.410 02:45:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:15.410 02:45:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:15.410 02:45:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:15.410 02:45:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:15.410 02:45:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:15.410 02:45:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:15.410 02:45:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:04:15.410 02:45:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:15.410 02:45:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:15.410 02:45:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:15.410 02:45:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:15.410 02:45:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:15.410 02:45:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:15.410 02:45:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:15.410 02:45:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:15.410 02:45:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:15.410 02:45:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:15.410 02:45:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:18.705 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:18.705 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:18.705 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 
00:04:18.705 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:18.705 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:18.705 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:18.705 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:18.705 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:18.705 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:18.706 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:18.706 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:18.706 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:18.706 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:18.706 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:18.706 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:18.706 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:18.706 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41829516 kB' 'MemAvailable: 44399244 kB' 'Buffers: 12388 kB' 'Cached: 12214600 kB' 'SwapCached: 28240 kB' 'Active: 10078780 kB' 'Inactive: 2757620 kB' 'Active(anon): 9479596 kB' 'Inactive(anon): 438972 kB' 'Active(file): 599184 kB' 'Inactive(file): 2318648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8286204 
kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 584436 kB' 'Mapped: 180636 kB' 'Shmem: 9309156 kB' 'KReclaimable: 298068 kB' 'Slab: 887376 kB' 'SReclaimable: 298068 kB' 'SUnreclaim: 589308 kB' 'KernelStack: 22256 kB' 'PageTables: 8796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11260996 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216196 kB' 'VmallocChunk: 0 kB' 'Percpu: 80640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2671988 kB' 'DirectMap2M: 48394240 kB' 'DirectMap1G: 17825792 kB' 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.706 02:45:08 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.706 02:45:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.706 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.706 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.706 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.706 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.706 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.706 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.706 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.707 02:45:09 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 
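(Editor's note: the xtrace above is setup/common.sh's get_meminfo scanning /proc/meminfo field by field, echoing 0 for AnonHugePages, which hugepages.sh then records as anon=0; the same loop repeats below for HugePages_Surp, HugePages_Rsvd and HugePages_Total. A minimal sketch of that parsing loop follows. It is an approximation reconstructed from the trace, not the verbatim setup/common.sh source, and the per-node path handling is an assumption based on the "node=" and node/meminfo checks visible in the trace.)

    shopt -s extglob                       # needed for the +([0-9]) prefix strip below
    get_meminfo() {
            local get=$1 node=$2           # field name, optional NUMA node (assumed calling convention)
            local mem_f=/proc/meminfo
            # assumption: per-node lookups read the sysfs meminfo file when it exists
            [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
                    mem_f=/sys/devices/system/node/node$node/meminfo
            local mem var val _ line
            mapfile -t mem < "$mem_f"                 # the real script feeds this via printf, as seen above
            mem=("${mem[@]#Node +([0-9]) }")          # drop the "Node N " prefix on per-node files
            for line in "${mem[@]}"; do
                    IFS=': ' read -r var val _ <<< "$line"
                    [[ $var == "$get" ]] || continue  # skip every field except the requested one
                    echo "${val:-0}"                  # e.g. 0 for AnonHugePages in the trace above
                    return 0
            done
            return 1
    }
    # Usage as in the trace: anon=$(get_meminfo AnonHugePages); surp=$(get_meminfo HugePages_Surp)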
00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41837508 kB' 'MemAvailable: 44407236 kB' 'Buffers: 12388 kB' 'Cached: 12214604 kB' 'SwapCached: 28240 kB' 'Active: 10078148 kB' 'Inactive: 2757620 kB' 'Active(anon): 9478964 kB' 'Inactive(anon): 438972 kB' 'Active(file): 599184 kB' 'Inactive(file): 2318648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8286204 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583768 kB' 'Mapped: 180672 kB' 'Shmem: 9309160 kB' 'KReclaimable: 298068 kB' 'Slab: 887316 kB' 'SReclaimable: 298068 kB' 'SUnreclaim: 589248 kB' 'KernelStack: 21888 kB' 'PageTables: 7996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11259888 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216036 kB' 'VmallocChunk: 0 kB' 'Percpu: 80640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2671988 kB' 'DirectMap2M: 48394240 kB' 'DirectMap1G: 17825792 kB' 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.707 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.708 02:45:09 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # continue 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.708 02:45:09 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.708 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.709 02:45:09 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41837636 kB' 'MemAvailable: 44407364 kB' 'Buffers: 12388 kB' 'Cached: 12214624 kB' 'SwapCached: 28240 kB' 'Active: 10078084 kB' 'Inactive: 2757620 kB' 'Active(anon): 9478900 kB' 'Inactive(anon): 438972 kB' 'Active(file): 599184 kB' 'Inactive(file): 2318648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8286204 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583716 kB' 'Mapped: 180660 kB' 'Shmem: 9309180 kB' 'KReclaimable: 298068 kB' 'Slab: 887316 kB' 'SReclaimable: 298068 kB' 'SUnreclaim: 589248 kB' 'KernelStack: 21984 kB' 'PageTables: 8248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11259908 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216036 kB' 'VmallocChunk: 0 kB' 'Percpu: 80640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2671988 kB' 'DirectMap2M: 48394240 kB' 'DirectMap1G: 17825792 kB' 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.709 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.710 
02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.710 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.711 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.711 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.711 02:45:09 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.711 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.711 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.711 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.711 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.711 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.711 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.711 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.711 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.711 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.711 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.711 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.711 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.711 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.711 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.711 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.711 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.711 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.711 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.711 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.711 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.711 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.711 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.711 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.711 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.711 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.711 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.711 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.711 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.711 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.711 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:18.711 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.711 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.711 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.711 02:45:09 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[... setup/common.sh@31-32 scan: the remaining /proc/meminfo keys (HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free) are each read and skipped with 'continue' ...]
00:04:18.711 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:18.711 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:18.711 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:18.711 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:18.711 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:18.711 nr_hugepages=1024
00:04:18.711 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:18.711 resv_hugepages=0
00:04:18.711 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:18.711 surplus_hugepages=0
00:04:18.711 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:18.711 anon_hugepages=0
00:04:18.711 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:18.711 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:18.711 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[... setup/common.sh@17-29: get=HugePages_Total, node='', mem_f=/proc/meminfo, mapfile -t mem ...]
00:04:18.711 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41837388 kB' 'MemAvailable: 44407116 kB' 'Buffers: 12388 kB' 'Cached: 12214644 kB' 'SwapCached: 28240 kB' 'Active: 10078400 kB' 'Inactive: 2757620 kB' 'Active(anon): 9479216 kB' 'Inactive(anon): 438972 kB' 'Active(file): 599184 kB' 'Inactive(file): 2318648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8286204 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 584024 kB' 'Mapped: 180660 kB' 'Shmem: 9309200 kB' 'KReclaimable: 298068 kB' 'Slab: 887316 kB' 'SReclaimable: 298068 kB' 'SUnreclaim: 589248 kB' 'KernelStack: 21984 kB' 'PageTables: 8248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11259932 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216052 kB' 'VmallocChunk: 0 kB' 'Percpu: 80640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2671988 kB' 'DirectMap2M: 48394240 kB' 'DirectMap1G: 17825792 kB'
[... setup/common.sh@31-32 scan: every key in the snapshot above is read and skipped with 'continue' until HugePages_Total is reached ...]
00:04:18.713 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:18.713 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:04:18.713 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:18.713 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:18.713 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:18.713 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:04:18.713 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:18.713 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:18.713 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:18.713 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:18.713 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:18.713 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:18.713 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:18.713 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:18.713 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
[... setup/common.sh@17-31: get=HugePages_Surp, node=0, mem_f=/sys/devices/system/node/node0/meminfo, mapfile -t mem ...]
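Every lookup in this trace goes through the same setup/common.sh helper: pick /proc/meminfo, or /sys/devices/system/node/node<N>/meminfo when a node number is given, strip the "Node <N>" prefix that the per-node files carry, and print the value of the first line whose key matches. A minimal bash sketch of that behaviour follows; the name get_meminfo_sketch and the sed-based prefix stripping are illustrative simplifications, not the literal mapfile/read loop traced above.

    #!/usr/bin/env bash
    # Sketch: look up one key in /proc/meminfo or a per-node meminfo file.
    # Illustrative only; the real helper lives in setup/common.sh.
    get_meminfo_sketch() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo
        # Per-node statistics live under sysfs.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local var val _
        # Per-node files prefix every line with "Node <N> "; strip it so both
        # formats parse the same way, then print the value of the first match.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
        return 1
    }
    # Example: get_meminfo_sketch HugePages_Total     -> 1024 on this box
    #          get_meminfo_sketch HugePages_Surp 0    -> per-node surplus (0 here)

On this machine the sketch would print 1024 for HugePages_Total and 0 for the node-0 HugePages_Surp query, matching the values echoed by the traced helper.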
00:04:18.713 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 21316744 kB' 'MemUsed: 11322396 kB' 'SwapCached: 25600 kB' 'Active: 6945440 kB' 'Inactive: 1542512 kB' 'Active(anon): 6753720 kB' 'Inactive(anon): 432008 kB' 'Active(file): 191720 kB' 'Inactive(file): 1110504 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8135828 kB' 'Mapped: 114936 kB' 'AnonPages: 355240 kB' 'Shmem: 6808004 kB' 'KernelStack: 12552 kB' 'PageTables: 4360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 181480 kB' 'Slab: 508188 kB' 'SReclaimable: 181480 kB' 'SUnreclaim: 326708 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... setup/common.sh@31-32 scan: each node0 key in the snapshot above is read and skipped with 'continue' until HugePages_Surp is reached ...]
00:04:18.715 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:18.715 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:18.715 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:18.715 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:18.715 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:18.715 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:18.715 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
[... setup/common.sh@17-31: get=HugePages_Surp, node=1, mem_f=/sys/devices/system/node/node1/meminfo, mapfile -t mem ...]
00:04:18.715 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656080 kB' 'MemFree: 20520872 kB' 'MemUsed: 7135208 kB' 'SwapCached: 2640 kB' 'Active: 3132660 kB' 'Inactive: 1215108 kB' 'Active(anon): 2725196 kB' 'Inactive(anon): 6964 kB' 'Active(file): 407464 kB' 'Inactive(file): 1208144 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4119464 kB' 'Mapped: 65724 kB' 'AnonPages: 228420 kB' 'Shmem: 2501216 kB' 'KernelStack: 9432 kB' 'PageTables: 3724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 116588 kB' 'Slab: 379128 kB' 'SReclaimable: 116588 kB' 'SUnreclaim: 262540 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... setup/common.sh@31-32 scan: each node1 key in the snapshot above is read and skipped with 'continue' until HugePages_Surp is reached ...]
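The node0 and node1 snapshots above both report 'HugePages_Total: 512' with no surplus, which is what even_2G_alloc expects for 1024 pages spread evenly over 2 nodes. A simplified sketch of that per-node check, assuming the get_meminfo_sketch helper shown earlier; the function and variable names are illustrative, not the literal setup/hugepages.sh code:

    #!/usr/bin/env bash
    # Sketch: verify that nr_hugepages ended up spread evenly across NUMA nodes,
    # the situation HUGE_EVEN_ALLOC=yes asks for. Depends on get_meminfo_sketch.
    verify_even_alloc_sketch() {
        local nr_hugepages=$1
        local -a nodes=(/sys/devices/system/node/node[0-9]*)
        local expected=$(( nr_hugepages / ${#nodes[@]} ))
        local node n got
        for node in "${nodes[@]}"; do
            n=${node##*node}
            got=$(get_meminfo_sketch HugePages_Total "$n")
            echo "node$n=$got expecting $expected"
            (( got == expected )) || return 1
        done
    }
    # e.g. verify_even_alloc_sketch 1024  ->  node0=512 expecting 512
    #                                         node1=512 expecting 512

The trace that follows reports exactly this outcome: surplus 0 is added to each node's count and the final hugepages.sh@130 comparison of 512 against 512 passes.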
00:04:18.716 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:18.716 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:18.716 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:18.716 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:18.716 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:18.716 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:18.716 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:18.716 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:18.716 node0=512 expecting 512
00:04:18.716 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:18.716 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:18.716 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:18.716 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:04:18.716 node1=512 expecting 512
00:04:18.716 02:45:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:18.716
00:04:18.716 real 0m3.079s
00:04:18.716 user 0m1.058s
00:04:18.716 sys 0m2.013s
00:04:18.716 02:45:09 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:04:18.716 02:45:09 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:18.716 ************************************
00:04:18.716 END TEST even_2G_alloc
00:04:18.716 ************************************
00:04:18.716 02:45:09 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:04:18.716 02:45:09 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:04:18.716 02:45:09 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:04:18.716 02:45:09 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:18.716 ************************************
00:04:18.716 START TEST odd_alloc
************************************ 00:04:18.717 02:45:09 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc 00:04:18.717 02:45:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:18.717 02:45:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:18.717 02:45:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:18.717 02:45:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:18.717 02:45:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:18.717 02:45:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:18.717 02:45:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:18.717 02:45:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:18.717 02:45:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:18.717 02:45:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:18.717 02:45:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:18.717 02:45:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:18.717 02:45:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:18.717 02:45:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:18.717 02:45:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:18.717 02:45:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:18.717 02:45:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:04:18.717 02:45:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:18.717 02:45:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:18.717 02:45:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:18.717 02:45:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:18.717 02:45:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:18.717 02:45:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:18.717 02:45:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:18.717 02:45:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:18.717 02:45:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:18.717 02:45:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:18.717 02:45:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:22.010 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:22.010 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:22.010 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:22.010 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:22.010 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:22.010 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:22.010 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:22.010 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:22.010 
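The odd_alloc trace above requests 2098176 kB of hugepages (1025 pages of 2048 kB, rounding up) and, with no user-specified nodes, splits them across the two NUMA nodes as 512 and 513. A minimal sketch of that split, assuming a two-node machine; the variable names mirror the nodes_test/_no_nodes values seen in the trace, but this is an illustration rather than the hugepages.sh implementation itself:

    #!/usr/bin/env bash
    # Illustrative only: divide an odd hugepage count across NUMA nodes so the
    # per-node totals match the 512/513 split reported by the trace.
    nr_hugepages=1025        # 2098176 kB / 2048 kB per page, rounded up
    _no_nodes=2
    declare -a nodes_test

    pages=$nr_hugepages
    while (( _no_nodes > 0 )); do
        # The highest-numbered node gets the floor of the remaining share; the
        # remainder rolls forward, so node0 ends up with the extra page (513).
        nodes_test[_no_nodes - 1]=$(( pages / _no_nodes ))
        pages=$(( pages - nodes_test[_no_nodes - 1] ))
        _no_nodes=$(( _no_nodes - 1 ))
    done

    for node in "${!nodes_test[@]}"; do
        echo "node${node}=${nodes_test[node]} expecting ${nodes_test[node]}"
    done
    # Prints: node0=513 expecting 513, then node1=512 expecting 512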
0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:22.010 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:22.010 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:22.010 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:22.010 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:22.010 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:22.010 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:22.010 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:22.010 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41876564 kB' 'MemAvailable: 44446264 kB' 'Buffers: 12388 kB' 'Cached: 12214776 kB' 'SwapCached: 28240 kB' 'Active: 10078196 kB' 'Inactive: 2757620 kB' 'Active(anon): 9479012 kB' 'Inactive(anon): 438972 kB' 'Active(file): 599184 kB' 'Inactive(file): 2318648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8286204 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583324 kB' 'Mapped: 180708 kB' 'Shmem: 9309332 kB' 'KReclaimable: 298012 kB' 'Slab: 886868 kB' 'SReclaimable: 298012 kB' 'SUnreclaim: 588856 kB' 'KernelStack: 21952 kB' 'PageTables: 8172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486612 kB' 'Committed_AS: 11260400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215924 kB' 
'VmallocChunk: 0 kB' 'Percpu: 80640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2671988 kB' 'DirectMap2M: 48394240 kB' 'DirectMap1G: 17825792 kB' 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.010 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.011 02:45:12 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.011 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.011 
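The AnonHugePages pass above ends with anon=0, and the same scan is then repeated for HugePages_Surp and HugePages_Rsvd: /proc/meminfo is read one "Key: value" pair at a time with IFS=': ', every key other than the requested one takes the "continue" branch (the long runs of identical records in the log), and the matching key's value is echoed back. A self-contained sketch of that pattern, using a hypothetical helper name and reading /proc/meminfo directly instead of the mapfile'd copy the script works from:

    #!/usr/bin/env bash
    # Illustrative meminfo lookup in the style of the get_meminfo calls traced above.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # Every non-matching key is skipped; only the requested field's value
            # (in kB for sizes, a bare count for the HugePages_* fields) is printed.
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1
    }

    get_meminfo_sketch HugePages_Surp    # e.g. 0 on this run
    get_meminfo_sketch HugePages_Total   # e.g. 1025 once the odd_alloc pages are reserved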
02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41877004 kB' 'MemAvailable: 44446704 kB' 'Buffers: 12388 kB' 'Cached: 12214780 kB' 'SwapCached: 28240 kB' 'Active: 10078260 kB' 'Inactive: 2757620 kB' 'Active(anon): 9479076 kB' 'Inactive(anon): 438972 kB' 'Active(file): 599184 kB' 'Inactive(file): 2318648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8286204 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583396 kB' 'Mapped: 180676 kB' 'Shmem: 9309336 kB' 'KReclaimable: 298012 kB' 'Slab: 886868 kB' 'SReclaimable: 298012 kB' 'SUnreclaim: 588856 kB' 'KernelStack: 21936 kB' 'PageTables: 8116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486612 kB' 'Committed_AS: 11260416 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215924 kB' 'VmallocChunk: 0 kB' 'Percpu: 80640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2671988 kB' 'DirectMap2M: 48394240 kB' 'DirectMap1G: 17825792 kB' 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.012 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
[[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.013 02:45:12 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41876284 kB' 'MemAvailable: 44445984 kB' 'Buffers: 12388 kB' 'Cached: 12214780 kB' 'SwapCached: 28240 kB' 'Active: 10077468 kB' 'Inactive: 2757620 kB' 'Active(anon): 9478284 kB' 'Inactive(anon): 438972 kB' 'Active(file): 599184 kB' 'Inactive(file): 2318648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8286204 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583072 kB' 'Mapped: 180600 kB' 'Shmem: 9309336 kB' 'KReclaimable: 298012 kB' 'Slab: 886824 kB' 'SReclaimable: 298012 kB' 'SUnreclaim: 588812 kB' 'KernelStack: 21936 kB' 'PageTables: 8112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486612 kB' 'Committed_AS: 11260436 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215940 kB' 'VmallocChunk: 0 kB' 'Percpu: 80640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2671988 kB' 'DirectMap2M: 48394240 kB' 'DirectMap1G: 17825792 kB' 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.013 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.014 02:45:12 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.014 
02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.014 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:22.015 nr_hugepages=1025 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:22.015 resv_hugepages=0 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:22.015 surplus_hugepages=0 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:22.015 anon_hugepages=0 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- 
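
The trace above is setup/common.sh's get_meminfo helper scanning every /proc/meminfo key until the requested one is reached (HugePages_Rsvd, which returns 0 here and gives resv=0), after which the same loop is started again for HugePages_Total. A condensed sketch of what that traced loop appears to do; the function and variable names below are illustrative, not the verbatim script:

  # Sketch, assuming the loop mirrors the setup/common.sh lines traced as @17..@33.
  shopt -s extglob
  get_meminfo_sketch() {
      local get=$1 node=$2 mem_f=/proc/meminfo var val _ line
      # per-node statistics come from sysfs when a node index is supplied
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      local -a mem
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix each key with "Node <id> "
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          # keys are compared literally, hence the fully escaped patterns in the trace
          [[ $var == "$get" ]] || continue
          echo "$val"                    # e.g. 0 for HugePages_Rsvd, 1025 for HugePages_Total
          return 0
      done
      return 1
  }

On this host the sketch would print 0 for "get_meminfo_sketch HugePages_Rsvd" and 1025 for "get_meminfo_sketch HugePages_Total", matching the values echoed in the trace.
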
setup/common.sh@20 -- # local mem_f mem 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41876284 kB' 'MemAvailable: 44445984 kB' 'Buffers: 12388 kB' 'Cached: 12214820 kB' 'SwapCached: 28240 kB' 'Active: 10077408 kB' 'Inactive: 2757620 kB' 'Active(anon): 9478224 kB' 'Inactive(anon): 438972 kB' 'Active(file): 599184 kB' 'Inactive(file): 2318648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8286204 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 582908 kB' 'Mapped: 180600 kB' 'Shmem: 9309376 kB' 'KReclaimable: 298012 kB' 'Slab: 886824 kB' 'SReclaimable: 298012 kB' 'SUnreclaim: 588812 kB' 'KernelStack: 21936 kB' 'PageTables: 8108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486612 kB' 'Committed_AS: 11260456 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215940 kB' 'VmallocChunk: 0 kB' 'Percpu: 80640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2671988 kB' 'DirectMap2M: 48394240 kB' 'DirectMap1G: 17825792 kB' 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.015 02:45:12 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.015 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.016 02:45:12 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.016 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 21342264 kB' 'MemUsed: 11296876 kB' 'SwapCached: 25600 kB' 'Active: 6945848 kB' 'Inactive: 1542512 kB' 'Active(anon): 6754128 kB' 'Inactive(anon): 432008 kB' 'Active(file): 191720 kB' 'Inactive(file): 1110504 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8135952 kB' 'Mapped: 114952 kB' 'AnonPages: 355700 kB' 'Shmem: 6808128 kB' 'KernelStack: 12552 kB' 'PageTables: 4512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 181424 kB' 'Slab: 507952 kB' 'SReclaimable: 181424 kB' 'SUnreclaim: 326528 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
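
At hugepages.sh@27-@33 above, get_nodes enumerates the NUMA nodes and records a per-node page count (512 for node0 and 513 for node1 in this odd_alloc run) before the per-node HugePages_Surp checks start. A sketch of that discovery step, under the assumption that the recorded counts correspond to each node's hugepage total, fetched here through the helper sketched earlier; names are illustrative:

  # Sketch of node discovery; nodes_sys is keyed by the numeric node index.
  shopt -s extglob nullglob
  declare -a nodes_sys=()
  get_nodes_sketch() {
      local node
      for node in /sys/devices/system/node/node+([0-9]); do
          # ${node##*node} drops everything through the last "node", leaving the index
          nodes_sys[${node##*node}]=$(get_meminfo_sketch HugePages_Total "${node##*node}")
      done
      local no_nodes=${#nodes_sys[@]}    # 2 on this machine, per the trace
      (( no_nodes > 0 ))
  }
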
'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.017 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.018 02:45:12 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.018 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656080 kB' 'MemFree: 20534020 kB' 'MemUsed: 7122060 kB' 'SwapCached: 2640 kB' 'Active: 3132056 kB' 'Inactive: 1215108 kB' 'Active(anon): 2724592 kB' 'Inactive(anon): 6964 kB' 'Active(file): 407464 kB' 'Inactive(file): 1208144 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4119516 kB' 'Mapped: 65648 kB' 'AnonPages: 227768 kB' 'Shmem: 2501268 kB' 'KernelStack: 9400 kB' 'PageTables: 3648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 116588 kB' 'Slab: 378872 kB' 'SReclaimable: 116588 kB' 'SUnreclaim: 262284 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc 
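
With both per-node snapshots printed (HugePages_Total: 512 on node0 and 513 on node1), the surrounding hugepages.sh checks amount to verifying that the 1025 odd-allocated pages are fully accounted for, globally and per node. A minimal sketch of that accounting, reusing the helpers sketched above; the function name is illustrative and the values are taken from this run:

  # Sketch: global total == requested + surplus + reserved, and per-node counts sum up.
  check_odd_alloc_sketch() {
      local nr_hugepages=1025 resv=0 surp=0 node total=0
      (( $(get_meminfo_sketch HugePages_Total) == nr_hugepages + surp + resv )) || return 1
      for node in "${!nodes_sys[@]}"; do
          # no surplus pages expected on either node for a static odd allocation
          (( $(get_meminfo_sketch HugePages_Surp "$node") == 0 )) || return 1
          (( total += nodes_sys[node] ))
      done
      (( total == nr_hugepages ))        # 512 + 513 == 1025 in this run
  }
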
-- setup/common.sh@32 -- # continue 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.019 02:45:12 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.019 02:45:12 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.019 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.020 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.020 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.020 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.020 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.020 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.020 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.020 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.020 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.020 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.020 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:22.020 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.020 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.020 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.020 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.020 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.020 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.020 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.020 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.020 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.020 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.020 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.020 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.020 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.020 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.020 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.020 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.020 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.020 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.020 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.020 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.020 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.020 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.020 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.020 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.020 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.020 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.020 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.020 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.020 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.020 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.020 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.020 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.020 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:22.020 02:45:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:22.020 02:45:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:22.020 02:45:12 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:22.020 02:45:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:22.020 02:45:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:22.020 02:45:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:22.020 node0=512 expecting 513 00:04:22.279 02:45:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:22.279 02:45:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:22.279 02:45:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:22.279 02:45:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:22.279 node1=513 expecting 512 00:04:22.279 02:45:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:22.279 00:04:22.279 real 0m3.579s 00:04:22.279 user 0m1.253s 00:04:22.279 sys 0m2.355s 00:04:22.279 02:45:12 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:22.279 02:45:12 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:22.279 ************************************ 00:04:22.279 END TEST odd_alloc 00:04:22.279 ************************************ 00:04:22.279 02:45:12 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:22.279 02:45:12 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:22.279 02:45:12 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:22.279 02:45:12 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:22.279 ************************************ 00:04:22.279 START TEST custom_alloc 00:04:22.279 ************************************ 00:04:22.279 02:45:12 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc 00:04:22.279 02:45:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:22.279 02:45:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:22.279 02:45:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:22.279 02:45:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:22.279 02:45:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:22.280 02:45:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:22.280 02:45:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:22.280 02:45:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:22.280 02:45:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:22.280 02:45:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:22.280 02:45:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:22.280 02:45:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:22.280 02:45:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:22.280 02:45:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 
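The get_test_nr_hugepages call traced above turns the requested size into a page count by dividing by 2048, which matches the "Hugepagesize: 2048 kB" reported later in this log: 1048576 maps to nr_hugepages=512 here, and the later 2097152 request maps to 1024. A minimal sketch of that arithmetic, assuming both values are in the same unit and using a hypothetical helper name (not a function from setup/hugepages.sh):

# sketch only: size-to-page-count conversion implied by the trace above
size_to_hugepages() {
  local size_kb=$1
  local hugepage_kb=2048            # assumption: 2048 kB default hugepage size, per Hugepagesize in the log
  echo $(( size_kb / hugepage_kb ))
}
size_to_hugepages 1048576   # -> 512, matching nr_hugepages=512 above
size_to_hugepages 2097152   # -> 1024, matching the later nr_hugepages=1024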
00:04:22.280 02:45:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:22.280 02:45:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:22.280 02:45:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:22.280 02:45:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:22.280 02:45:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:22.280 02:45:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:22.280 02:45:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:22.280 02:45:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:22.280 02:45:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:22.280 02:45:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:22.280 02:45:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:22.280 02:45:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:22.280 02:45:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:22.280 02:45:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:22.280 02:45:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:22.280 02:45:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:22.280 02:45:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:22.280 02:45:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:22.280 02:45:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:22.280 02:45:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:22.280 02:45:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:22.280 02:45:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:22.280 02:45:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:22.280 02:45:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:22.280 02:45:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:22.280 02:45:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:22.280 02:45:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:22.280 02:45:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:22.280 02:45:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:22.280 02:45:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:22.280 02:45:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:22.280 02:45:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:22.280 02:45:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:22.280 02:45:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:22.280 02:45:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for 
node in "${!nodes_hp[@]}" 00:04:22.280 02:45:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:22.280 02:45:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:22.280 02:45:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:22.280 02:45:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:22.280 02:45:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:22.280 02:45:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:22.280 02:45:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:22.280 02:45:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:22.280 02:45:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:22.280 02:45:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:22.280 02:45:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:22.280 02:45:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:22.280 02:45:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:22.280 02:45:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:22.280 02:45:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:22.280 02:45:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:22.280 02:45:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:22.280 02:45:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:22.280 02:45:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:22.280 02:45:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:22.280 02:45:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:22.280 02:45:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:22.280 02:45:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:25.575 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:25.575 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:25.575 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:25.575 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:25.575 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:25.575 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:25.575 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:25.575 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:25.575 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:25.575 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:25.575 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:25.575 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:25.575 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 
00:04:25.575 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:25.575 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:25.575 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:25.575 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:25.575 02:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:25.575 02:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:25.575 02:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:25.575 02:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:25.575 02:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:25.575 02:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:25.575 02:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:25.575 02:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:25.575 02:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:25.575 02:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:25.575 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:25.575 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:25.575 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:25.575 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.575 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.575 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.575 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.575 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.575 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.575 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.575 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.575 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 40887516 kB' 'MemAvailable: 43457200 kB' 'Buffers: 12388 kB' 'Cached: 12214940 kB' 'SwapCached: 28240 kB' 'Active: 10078624 kB' 'Inactive: 2757620 kB' 'Active(anon): 9479440 kB' 'Inactive(anon): 438972 kB' 'Active(file): 599184 kB' 'Inactive(file): 2318648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8286204 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583980 kB' 'Mapped: 180648 kB' 'Shmem: 9309496 kB' 'KReclaimable: 297980 kB' 'Slab: 886612 kB' 'SReclaimable: 297980 kB' 'SUnreclaim: 588632 kB' 'KernelStack: 21984 kB' 'PageTables: 8180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963348 kB' 'Committed_AS: 11261236 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215924 kB' 'VmallocChunk: 0 kB' 'Percpu: 80640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 
kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2671988 kB' 'DirectMap2M: 48394240 kB' 'DirectMap1G: 17825792 kB' 00:04:25.575 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.575 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.575 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.575 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.575 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.575 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.575 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.575 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.575 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.575 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.575 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.575 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.575 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.575 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.575 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.575 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.575 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.575 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.575 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.575 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.575 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.575 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.575 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.575 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.575 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.575 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.575 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.576 02:45:16 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.576 02:45:16 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.576 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 
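The long blocks above and below are the same get_meminfo walk repeated per field (AnonHugePages, then HugePages_Surp, then HugePages_Rsvd): with node= unset, the /sys/devices/system/node/node/meminfo check fails, so the script falls back to /proc/meminfo, reads it with IFS=': ', skips every key that does not match the requested one, and echoes the matching value (0 for each of these fields on this machine). A minimal sketch of that lookup pattern, with a hypothetical helper name standing in for the real setup/common.sh get_meminfo:

# sketch only: the "Key: value" scan the trace repeats for each meminfo field
get_meminfo_sketch() {
  local get=$1 var val _
  while IFS=': ' read -r var val _; do   # e.g. var=HugePages_Surp val=0
    if [[ $var == "$get" ]]; then
      echo "$val"
      return 0
    fi
  done < /proc/meminfo                   # simplification: system-wide file, no per-node path
  echo 0                                 # field not present
}
get_meminfo_sketch HugePages_Surp        # prints 0 here, as the trace shows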
00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 40888272 kB' 'MemAvailable: 43457956 kB' 'Buffers: 12388 kB' 'Cached: 12214960 kB' 'SwapCached: 28240 kB' 'Active: 10078216 kB' 'Inactive: 2757620 kB' 'Active(anon): 9479032 kB' 'Inactive(anon): 438972 kB' 'Active(file): 599184 kB' 'Inactive(file): 2318648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8286204 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583512 kB' 'Mapped: 180620 kB' 'Shmem: 9309516 kB' 'KReclaimable: 297980 kB' 'Slab: 886628 kB' 'SReclaimable: 297980 kB' 'SUnreclaim: 588648 kB' 'KernelStack: 21952 kB' 'PageTables: 8100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963348 kB' 'Committed_AS: 11262648 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215876 kB' 'VmallocChunk: 0 kB' 'Percpu: 80640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2671988 kB' 'DirectMap2M: 48394240 kB' 'DirectMap1G: 17825792 kB' 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.577 02:45:16 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.577 
02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.577 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.578 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 40887612 kB' 'MemAvailable: 43457296 kB' 'Buffers: 12388 kB' 'Cached: 12214960 kB' 'SwapCached: 28240 kB' 'Active: 10078512 kB' 'Inactive: 2757620 kB' 'Active(anon): 9479328 kB' 'Inactive(anon): 438972 kB' 'Active(file): 599184 kB' 'Inactive(file): 2318648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8286204 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583828 kB' 'Mapped: 180620 kB' 
'Shmem: 9309516 kB' 'KReclaimable: 297980 kB' 'Slab: 886628 kB' 'SReclaimable: 297980 kB' 'SUnreclaim: 588648 kB' 'KernelStack: 21936 kB' 'PageTables: 8068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963348 kB' 'Committed_AS: 11262416 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215876 kB' 'VmallocChunk: 0 kB' 'Percpu: 80640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2671988 kB' 'DirectMap2M: 48394240 kB' 'DirectMap1G: 17825792 kB' 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.579 02:45:16 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.579 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.580 02:45:16 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.580 02:45:16 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.580 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:25.581 nr_hugepages=1536 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:25.581 resv_hugepages=0 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:25.581 surplus_hugepages=0 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:25.581 anon_hugepages=0 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 40887964 kB' 'MemAvailable: 43457648 kB' 'Buffers: 12388 kB' 'Cached: 12214960 kB' 'SwapCached: 28240 kB' 'Active: 10078688 kB' 'Inactive: 2757620 kB' 'Active(anon): 9479504 kB' 'Inactive(anon): 438972 kB' 'Active(file): 599184 kB' 'Inactive(file): 2318648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8286204 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583992 kB' 'Mapped: 180620 kB' 'Shmem: 9309516 kB' 'KReclaimable: 297980 kB' 'Slab: 886628 kB' 'SReclaimable: 297980 kB' 'SUnreclaim: 588648 kB' 'KernelStack: 21920 kB' 'PageTables: 8024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963348 kB' 'Committed_AS: 11262808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215860 kB' 'VmallocChunk: 0 kB' 'Percpu: 80640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 
'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2671988 kB' 'DirectMap2M: 48394240 kB' 'DirectMap1G: 17825792 kB' 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.581 02:45:16 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.581 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.582 02:45:16 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.582 02:45:16 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 
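[editor's sketch] The get_nodes step traced just above ends with nodes_sys[0]=512, nodes_sys[1]=1024 and no_nodes=2, i.e. the 1536 requested 2048 kB hugepages are split across the two NUMA nodes. The following is a minimal, illustrative bash sketch of that kind of per-node read from the standard kernel sysfs layout; it is not the SPDK helper itself, and the variable names are assumptions chosen to mirror the trace.

```bash
#!/usr/bin/env bash
# Sketch only: count 2048 kB hugepages per NUMA node by walking
# /sys/devices/system/node/node<N>, the same sysfs tree the traced
# get_nodes step iterates over.
shopt -s extglob nullglob

nodes_sys=()
for node in /sys/devices/system/node/node+([0-9]); do
    idx=${node##*node}                                   # "node1" -> "1"
    nr_file=$node/hugepages/hugepages-2048kB/nr_hugepages
    [[ -r $nr_file ]] && nodes_sys[idx]=$(<"$nr_file")
done

echo "nodes seen: ${#nodes_sys[@]}"
for idx in "${!nodes_sys[@]}"; do
    echo "node$idx: ${nodes_sys[idx]} x 2048kB hugepages"
done
```

On the machine in this log the loop would report node0: 512 and node1: 1024, matching the 1536 total checked a few steps earlier.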
00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:25.582 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 21379116 kB' 'MemUsed: 11260024 kB' 'SwapCached: 25600 kB' 'Active: 6945028 kB' 'Inactive: 1542512 kB' 'Active(anon): 6753308 kB' 'Inactive(anon): 432008 kB' 'Active(file): 191720 kB' 'Inactive(file): 1110504 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8136012 kB' 'Mapped: 114972 kB' 'AnonPages: 354624 kB' 'Shmem: 6808188 kB' 'KernelStack: 12536 kB' 'PageTables: 4484 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 181392 kB' 'Slab: 507720 kB' 'SReclaimable: 181392 kB' 'SUnreclaim: 326328 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.583 02:45:16 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.583 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.584 02:45:16 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:25.584 02:45:16 
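(Editorial note on the trace above: the repeated "[[ <field> == HugePages_Surp ]] / continue" records are one lookup walking a meminfo file field by field. A minimal sketch of that lookup pattern, written here for illustration and not the exact setup/common.sh source, looks like this.)

    # Illustrative sketch of the meminfo lookup being traced above.
    # Given a field name and an optional NUMA node, pick the right meminfo
    # file, strip the "Node N " prefix that per-node files carry, and print
    # the value of the first matching field.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        shopt -s extglob
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node lines start with "Node N "
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }
    # e.g. get_meminfo_sketch HugePages_Surp 1  -> node1 surplus hugepage count
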
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656080 kB' 'MemFree: 19510200 kB' 'MemUsed: 8145880 kB' 'SwapCached: 2640 kB' 'Active: 3133488 kB' 'Inactive: 1215108 kB' 'Active(anon): 2726024 kB' 'Inactive(anon): 6964 kB' 'Active(file): 407464 kB' 'Inactive(file): 1208144 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4119640 kB' 'Mapped: 65648 kB' 'AnonPages: 229156 kB' 'Shmem: 2501392 kB' 'KernelStack: 9416 kB' 'PageTables: 3800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 116588 kB' 'Slab: 378908 kB' 'SReclaimable: 116588 kB' 'SUnreclaim: 262320 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.584 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.585 02:45:16 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # 
sorted_s[nodes_sys[node]]=1 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:25.585 node0=512 expecting 512 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:25.585 node1=1024 expecting 1024 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:25.585 00:04:25.585 real 0m3.421s 00:04:25.585 user 0m1.227s 00:04:25.585 sys 0m2.229s 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:25.585 02:45:16 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:25.585 ************************************ 00:04:25.585 END TEST custom_alloc 00:04:25.585 ************************************ 00:04:25.585 02:45:16 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:25.585 02:45:16 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:25.585 02:45:16 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:25.585 02:45:16 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:25.844 ************************************ 00:04:25.844 START TEST no_shrink_alloc 00:04:25.844 ************************************ 00:04:25.844 02:45:16 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc 00:04:25.844 02:45:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:25.844 02:45:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:25.844 02:45:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:25.844 02:45:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:25.844 02:45:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:25.844 02:45:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:25.844 02:45:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:25.844 02:45:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:25.844 02:45:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:25.844 02:45:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:25.844 02:45:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:25.844 02:45:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:25.844 02:45:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:25.844 02:45:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:25.844 02:45:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:25.844 02:45:16 setup.sh.hugepages.no_shrink_alloc -- 
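(Editorial note: custom_alloc has just passed by joining the per-node counts into "512,1024" and comparing against the expectation; no_shrink_alloc now converts its 2097152 size argument into a page count for node 0. A rough sketch of that sizing math, assuming the size argument is in kB, which is consistent with the 2097152 / 2048 = 1024 pages seen in the trace; the helper names below are hypothetical, the real logic lives in setup/hugepages.sh.)

    # Illustrative sizing sketch for the no_shrink_alloc prologue.
    size_kb=2097152
    hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this host
    nr_hugepages=$(( size_kb / hugepagesize_kb ))                        # 2097152 / 2048 = 1024
    echo "requesting $nr_hugepages hugepages on node 0"
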
setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:25.844 02:45:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:25.844 02:45:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:25.844 02:45:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:25.844 02:45:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:25.844 02:45:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:25.844 02:45:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:29.136 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:29.136 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:29.136 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:29.136 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:29.136 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:29.136 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:29.136 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:29.136 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:29.136 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:29.136 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:29.136 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:29.136 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:29.136 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:29.136 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:29.136 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:29.136 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:29.136 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:29.136 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:29.136 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:29.136 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:29.136 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:29.136 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:29.136 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:29.136 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:29.136 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:29.136 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:29.136 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:29.136 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:29.136 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:29.136 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.136 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.136 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:04:29.136 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.136 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.136 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.136 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.136 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.136 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41936544 kB' 'MemAvailable: 44506228 kB' 'Buffers: 12388 kB' 'Cached: 12215100 kB' 'SwapCached: 28240 kB' 'Active: 10080724 kB' 'Inactive: 2757620 kB' 'Active(anon): 9481540 kB' 'Inactive(anon): 438972 kB' 'Active(file): 599184 kB' 'Inactive(file): 2318648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8286204 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 585448 kB' 'Mapped: 180712 kB' 'Shmem: 9309656 kB' 'KReclaimable: 297980 kB' 'Slab: 887272 kB' 'SReclaimable: 297980 kB' 'SUnreclaim: 589292 kB' 'KernelStack: 22016 kB' 'PageTables: 8268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11264872 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216036 kB' 'VmallocChunk: 0 kB' 'Percpu: 80640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2671988 kB' 'DirectMap2M: 48394240 kB' 'DirectMap1G: 17825792 kB' 00:04:29.136 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.136 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.136 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.136 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.136 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.136 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.136 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.136 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.136 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.136 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.136 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.136 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.136 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.136 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.136 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.136 02:45:19 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:29.136 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.137 02:45:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.137 02:45:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.137 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.138 02:45:19 
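(Editorial note: verify_nr_hugepages first checked that transparent hugepages are not set to "[never]" -- the "always [madvise] never" string in the trace -- and is now reading AnonHugePages from /proc/meminfo, which is 0 kB in this run. A short illustrative sketch of that check; the sysfs path below is the usual source of that THP mode string and is an assumption here, not shown in the trace.)

    # Illustrative sketch of the anonymous-hugepage check in verify_nr_hugepages.
    thp_mode=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp_mode != *"[never]"* ]]; then
        anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
        echo "AnonHugePages: ${anon_kb} kB"
    fi
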
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41937752 kB' 'MemAvailable: 44507436 kB' 'Buffers: 12388 kB' 'Cached: 12215104 kB' 'SwapCached: 28240 kB' 'Active: 10079500 kB' 'Inactive: 2757620 kB' 'Active(anon): 9480316 kB' 'Inactive(anon): 438972 kB' 'Active(file): 599184 kB' 'Inactive(file): 2318648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8286204 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 584656 kB' 'Mapped: 180636 kB' 'Shmem: 9309660 kB' 'KReclaimable: 297980 kB' 'Slab: 887248 kB' 'SReclaimable: 297980 kB' 'SUnreclaim: 589268 kB' 'KernelStack: 22048 kB' 'PageTables: 8208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11263748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215972 kB' 'VmallocChunk: 0 kB' 
'Percpu: 80640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2671988 kB' 'DirectMap2M: 48394240 kB' 'DirectMap1G: 17825792 kB' 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.138 02:45:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.138 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
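The xtrace entries above come from the get_meminfo helper in setup/common.sh: it loads /proc/meminfo (or a per-node meminfo file) with mapfile, strips any "Node <N> " prefix, then reads each line with IFS=': ' and skips every field whose name is not the requested key, here HugePages_Surp, echoing the value of the line that finally matches. A minimal sketch of that lookup pattern, assuming a hypothetical standalone helper name (get_meminfo_value) rather than the SPDK function verbatim:

#!/usr/bin/env bash
# Minimal sketch of the lookup pattern shown in the trace above; the helper
# name get_meminfo_value is an assumption, not the SPDK function verbatim.
get_meminfo_value() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node queries read the node-specific file when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    # Per-node files prefix every line with "Node <N> "; strip it so the field
    # name is the first token, as in /proc/meminfo, then split on ': '.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"   # e.g. 0 for HugePages_Surp, 1024 for HugePages_Total
            return 0
        fi
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}

get_meminfo_value HugePages_Surp      # prints 0 on the machine in this log
get_meminfo_value HugePages_Free 0    # per-node lookup against node0/meminfo

The echo 0 / return 0 pair at setup/common.sh@33 further down in the trace is the real helper reaching the HugePages_Surp line after skipping every field before it.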
00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
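Further down, at setup/hugepages.sh@99-110, these lookups feed the no_shrink_alloc accounting: surp and resv both come back 0, nr_hugepages stays at 1024, and the script only proceeds once HugePages_Total equals nr_hugepages + surp + resv. A small sketch of that consistency check follows, with a hypothetical awk one-liner standing in for the script's own get_meminfo:

#!/usr/bin/env bash
# Sketch of the accounting check that follows in this trace at
# setup/hugepages.sh@99-110; the awk helper is a stand-in assumption,
# not the script's own get_meminfo.
meminfo_value() { awk -v k="$1:" '$1 == k { print $2; exit }' /proc/meminfo; }

nr_hugepages=1024                           # the count this test configured
surp=$(meminfo_value HugePages_Surp)        # 0 in this log
resv=$(meminfo_value HugePages_Rsvd)        # 0 in this log
total=$(meminfo_value HugePages_Total)      # 1024 in this log

# The test only continues when the pool is exactly what it asked for:
# no surplus pages, no reservations, and the total unchanged.
if (( total == nr_hugepages + surp + resv )) && (( total == nr_hugepages )); then
    echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"
else
    echo "hugepage accounting mismatch: total=$total surp=$surp resv=$resv" >&2
fi

The get_nodes loop the trace reaches afterwards repeats the same per-key lookup for each NUMA node, switching mem_f to /sys/devices/system/node/node0/meminfo and node1/meminfo (no_nodes=2 on this host).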
00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.139 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41936768 kB' 'MemAvailable: 44506452 kB' 'Buffers: 12388 kB' 'Cached: 12215120 kB' 'SwapCached: 28240 kB' 'Active: 10079808 kB' 'Inactive: 2757620 kB' 'Active(anon): 9480624 kB' 'Inactive(anon): 438972 kB' 'Active(file): 599184 kB' 'Inactive(file): 2318648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8286204 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 585020 kB' 'Mapped: 180636 kB' 'Shmem: 9309676 kB' 'KReclaimable: 297980 kB' 'Slab: 887248 kB' 'SReclaimable: 297980 kB' 'SUnreclaim: 589268 kB' 'KernelStack: 22144 kB' 'PageTables: 8360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11267204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216100 kB' 'VmallocChunk: 0 kB' 'Percpu: 80640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2671988 kB' 'DirectMap2M: 48394240 kB' 'DirectMap1G: 17825792 kB' 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.140 
02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.140 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.141 02:45:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.141 02:45:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.141 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:29.142 nr_hugepages=1024 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:29.142 resv_hugepages=0 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:29.142 surplus_hugepages=0 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:29.142 anon_hugepages=0 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41937412 kB' 'MemAvailable: 44507096 kB' 'Buffers: 12388 kB' 'Cached: 12215144 kB' 'SwapCached: 28240 kB' 'Active: 10079572 kB' 'Inactive: 2757620 kB' 'Active(anon): 9480388 kB' 'Inactive(anon): 438972 kB' 'Active(file): 599184 kB' 'Inactive(file): 2318648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8286204 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 584700 kB' 'Mapped: 180636 kB' 'Shmem: 9309700 kB' 'KReclaimable: 297980 kB' 'Slab: 887248 kB' 'SReclaimable: 297980 kB' 'SUnreclaim: 589268 kB' 'KernelStack: 22016 kB' 'PageTables: 8320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11264936 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216036 kB' 'VmallocChunk: 0 kB' 'Percpu: 80640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2671988 kB' 'DirectMap2M: 48394240 kB' 'DirectMap1G: 17825792 kB' 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.142 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.143 02:45:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.143 02:45:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for 
node in /sys/devices/system/node/node+([0-9]) 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:29.143 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 20333400 kB' 'MemUsed: 12305740 kB' 'SwapCached: 25600 kB' 'Active: 6945420 kB' 'Inactive: 1542512 kB' 'Active(anon): 6753700 kB' 'Inactive(anon): 432008 kB' 'Active(file): 191720 kB' 'Inactive(file): 1110504 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8136020 kB' 'Mapped: 114976 kB' 'AnonPages: 355048 kB' 'Shmem: 6808196 kB' 'KernelStack: 12632 kB' 'PageTables: 4552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 181392 kB' 'Slab: 508336 kB' 'SReclaimable: 181392 kB' 'SUnreclaim: 326944 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.144 
02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.144 02:45:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.144 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.145 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.145 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.145 02:45:19 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:29.145 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.145 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.145 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.145 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.145 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.145 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.145 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.145 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.145 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.145 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.145 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.145 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.145 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.145 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.145 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.145 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.145 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.145 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.145 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.145 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.145 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.145 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.145 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.145 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.145 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.145 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.145 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.145 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.145 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.145 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.145 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.145 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.145 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.145 02:45:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.145 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.145 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.145 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.145 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.145 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.145 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.145 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:29.145 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:29.145 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:29.145 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:29.145 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:29.145 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:29.145 node0=1024 expecting 1024 00:04:29.145 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:29.145 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:29.145 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:29.145 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:29.145 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:29.145 02:45:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:32.485 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:32.485 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:32.485 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:32.485 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:32.485 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:32.485 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:32.485 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:32.485 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:32.485 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:32.485 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:32.485 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:32.485 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:32.485 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:32.485 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:32.485 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:32.485 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:32.485 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:32.485 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:32.485 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 
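Note on the INFO line above: the test sets CLEAR_HUGE=no and NRHUGE=512, yet setup.sh reports that node0 keeps its 1024 hugepages. A minimal sketch of that "no shrink" behaviour is shown below; it is an illustration only (hypothetical paths and variable names, not the real scripts/setup.sh): a smaller request is ignored rather than shrinking an existing allocation.

    #!/usr/bin/env bash
    # Hedged sketch, assuming 2 MB hugepages on node0; needs root to write sysfs.
    requested=${NRHUGE:-512}
    node_sysfs=/sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
    current=$(cat "$node_sysfs")
    if (( current >= requested )); then
        # CLEAR_HUGE=no: keep the larger existing allocation instead of shrinking it
        echo "INFO: Requested $requested hugepages but $current already allocated on node0"
    else
        echo "$requested" > "$node_sysfs"
    fi

With 1024 pages already present, the write is skipped, which is why verify_nr_hugepages below still sees 1024 pages per node.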
00:04:32.485 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:32.485 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:32.485 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:32.485 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:32.485 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:32.485 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:32.485 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:32.485 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:32.485 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:32.485 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:32.485 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:32.485 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:32.485 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.485 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:32.485 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:32.485 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.485 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.485 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.485 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.485 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41955620 kB' 'MemAvailable: 44525304 kB' 'Buffers: 12388 kB' 'Cached: 12215244 kB' 'SwapCached: 28240 kB' 'Active: 10080064 kB' 'Inactive: 2757620 kB' 'Active(anon): 9480880 kB' 'Inactive(anon): 438972 kB' 'Active(file): 599184 kB' 'Inactive(file): 2318648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8286204 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 584604 kB' 'Mapped: 180740 kB' 'Shmem: 9309800 kB' 'KReclaimable: 297980 kB' 'Slab: 887160 kB' 'SReclaimable: 297980 kB' 'SUnreclaim: 589180 kB' 'KernelStack: 21968 kB' 'PageTables: 8220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11262988 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215940 kB' 'VmallocChunk: 0 kB' 'Percpu: 80640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2671988 kB' 'DirectMap2M: 48394240 kB' 'DirectMap1G: 17825792 kB' 00:04:32.485 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.485 02:45:23 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:04:32.485 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.485 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.485 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.485 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.485 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.485 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.485 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.485 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.485 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.485 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.485 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.485 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.485 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.485 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.485 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.485 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.485 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.485 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.485 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.485 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.485 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.485 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.485 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.485 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.485 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.485 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.485 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.485 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.485 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.485 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.485 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.485 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.485 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.485 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:32.485 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.485 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.486 02:45:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.486 
02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:32.486 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:32.487 
02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41956588 kB' 'MemAvailable: 44526272 kB' 'Buffers: 12388 kB' 'Cached: 12215248 kB' 'SwapCached: 28240 kB' 'Active: 10079564 kB' 'Inactive: 2757620 kB' 'Active(anon): 9480380 kB' 'Inactive(anon): 438972 kB' 'Active(file): 599184 kB' 'Inactive(file): 2318648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8286204 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 584668 kB' 'Mapped: 180632 kB' 'Shmem: 9309804 kB' 'KReclaimable: 297980 kB' 'Slab: 887120 kB' 'SReclaimable: 297980 kB' 'SUnreclaim: 589140 kB' 'KernelStack: 21952 kB' 'PageTables: 8152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11263004 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215924 kB' 'VmallocChunk: 0 kB' 'Percpu: 80640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2671988 kB' 'DirectMap2M: 48394240 kB' 'DirectMap1G: 17825792 kB' 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
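The trace above shows get_meminfo walking every /proc/meminfo field until it reaches HugePages_Surp. A simplified, self-contained sketch of that parsing pattern (not the real setup/common.sh, and the helper name is made up) looks like this:

    get_meminfo_sketch() {
        # $1 = field name (e.g. HugePages_Surp), $2 = optional NUMA node number
        local get=$1 node=$2 mem_f=/proc/meminfo line var val _
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS= read -r line; do
            # per-node meminfo lines carry a "Node N " prefix; drop it before matching
            [[ $line =~ ^Node\ [0-9]+\ (.*)$ ]] && line=${BASH_REMATCH[1]}
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < "$mem_f"
        return 1
    }

On the machine traced here, get_meminfo_sketch HugePages_Total would print 1024 and get_meminfo_sketch HugePages_Surp would print 0, matching the values the loop below eventually echoes.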
00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.487 02:45:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.487 02:45:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.487 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.488 02:45:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- 
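The trace above is setup/common.sh's get_meminfo helper scanning /proc/meminfo one field at a time: with IFS=': ' it reads each line into var/val, skips every field that is not the requested key, and echoes the matching value (here HugePages_Surp, giving surp=0). A minimal sketch of that scan pattern, simplified from the traced script rather than the verbatim SPDK implementation:

    # Simplified sketch of the key scan seen in the trace (illustrative only;
    # the real helper also handles per-node files, shown later in this log).
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }
    get_meminfo HugePages_Surp    # prints 0 on the system captured in this log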
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.488 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41956396 kB' 'MemAvailable: 44526080 kB' 'Buffers: 12388 kB' 'Cached: 12215268 kB' 'SwapCached: 28240 kB' 'Active: 10079592 kB' 'Inactive: 2757620 kB' 'Active(anon): 9480408 kB' 'Inactive(anon): 438972 kB' 'Active(file): 599184 kB' 'Inactive(file): 2318648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8286204 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 584672 kB' 'Mapped: 180632 kB' 'Shmem: 9309824 kB' 'KReclaimable: 297980 kB' 'Slab: 887120 kB' 'SReclaimable: 297980 kB' 'SUnreclaim: 589140 kB' 'KernelStack: 21952 kB' 'PageTables: 8152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11263028 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215924 kB' 'VmallocChunk: 0 kB' 'Percpu: 80640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2671988 kB' 'DirectMap2M: 48394240 kB' 'DirectMap1G: 17825792 kB' 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.489 02:45:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.489 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:32.490 nr_hugepages=1024 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:32.490 resv_hugepages=0 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:32.490 surplus_hugepages=0 00:04:32.490 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:32.490 anon_hugepages=0 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- 
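At this point hugepages.sh has the numbers it needs: surp=0 and resv=0 from the two scans above, plus the summary lines nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0 it just echoed. Steps @107-@110 then re-read HugePages_Total and assert the accounting is consistent before moving on to the per-node breakdown. The same check written out as a standalone sketch, with the values hard-coded from this run and reusing the illustrative get_meminfo above:

    # Sketch of the hugepages.sh@110 consistency check (values taken from this log).
    nr_hugepages=1024                          # pages requested by the test
    surp=$(get_meminfo HugePages_Surp)         # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)         # 0 in this run
    total=$(get_meminfo HugePages_Total)       # 1024 in this run
    (( total == nr_hugepages + surp + resv )) ||
        echo "hugepage accounting mismatch: $total != $nr_hugepages + $surp + $resv" >&2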
setup/common.sh@28 -- # mapfile -t mem 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41972424 kB' 'MemAvailable: 44542108 kB' 'Buffers: 12388 kB' 'Cached: 12215288 kB' 'SwapCached: 28240 kB' 'Active: 10080124 kB' 'Inactive: 2757620 kB' 'Active(anon): 9480940 kB' 'Inactive(anon): 438972 kB' 'Active(file): 599184 kB' 'Inactive(file): 2318648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8286204 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 585172 kB' 'Mapped: 181136 kB' 'Shmem: 9309844 kB' 'KReclaimable: 297980 kB' 'Slab: 887184 kB' 'SReclaimable: 297980 kB' 'SUnreclaim: 589204 kB' 'KernelStack: 21984 kB' 'PageTables: 8152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11264272 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215876 kB' 'VmallocChunk: 0 kB' 'Percpu: 80640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2671988 kB' 'DirectMap2M: 48394240 kB' 'DirectMap1G: 17825792 kB' 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.491 02:45:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.491 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.492 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.492 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.492 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.492 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.492 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.492 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.492 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.492 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.492 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.492 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.492 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.492 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.492 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.492 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.492 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.492 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.492 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.492 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.492 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.492 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.492 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.492 02:45:23 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:32.492 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.492 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.492 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.492 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.492 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.492 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.492 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.492 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.492 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.492 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.492 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.492 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.492 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.492 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.492 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.492 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.492 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.492 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.492 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.492 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.492 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.492 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.492 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.492 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.492 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.492 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.492 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.492 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.492 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.492 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.492 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.492 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.492 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.751 02:45:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.751 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.751 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.751 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- 
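The per-node phase that begins here is driven by get_nodes: it globs /sys/devices/system/node/node<N> (finding no_nodes=2 on this box) and then calls get_meminfo with an explicit node argument, which switches the source file from /proc/meminfo to /sys/devices/system/node/node0/meminfo and strips the leading "Node 0 " prefix from each line before the same key scan runs. A rough sketch of that node-aware variant; the helper name node_meminfo is illustrative, only the sysfs paths come from the trace:

    # Sketch of the per-node lookup path visible in the trace (illustrative only).
    shopt -s extglob
    node_meminfo() {
        local node=$1 get=$2 var val _
        # Per-node lines look like "Node 0 HugePages_Surp: 0"; drop the prefix first.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(sed "s/^Node $node //" "/sys/devices/system/node/node$node/meminfo")
        return 1
    }
    for node in /sys/devices/system/node/node+([0-9]); do
        echo "node${node##*node} HugePages_Surp: $(node_meminfo "${node##*node}" HugePages_Surp)"
    done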
setup/common.sh@31 -- # IFS=': ' 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 20349004 kB' 'MemUsed: 12290136 kB' 'SwapCached: 25600 kB' 'Active: 6950492 kB' 'Inactive: 1542512 kB' 'Active(anon): 6758772 kB' 'Inactive(anon): 432008 kB' 'Active(file): 191720 kB' 'Inactive(file): 1110504 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8136064 kB' 'Mapped: 114984 kB' 'AnonPages: 360204 kB' 'Shmem: 6808240 kB' 'KernelStack: 12568 kB' 'PageTables: 4448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 181392 kB' 'Slab: 508248 kB' 'SReclaimable: 181392 kB' 'SUnreclaim: 326856 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.752 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.753 02:45:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.753 02:45:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:32.753 node0=1024 expecting 1024 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:32.753 00:04:32.753 real 0m6.881s 00:04:32.753 user 0m2.547s 00:04:32.753 sys 0m4.388s 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:32.753 02:45:23 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:32.753 ************************************ 00:04:32.753 END TEST no_shrink_alloc 00:04:32.753 ************************************ 00:04:32.753 
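The no_shrink_alloc trace above is dominated by the get_meminfo scan: setup/common.sh reads either /proc/meminfo or /sys/devices/system/node/nodeN/meminfo, strips the "Node N" prefix, splits each line on ': ', and keeps issuing "continue" until the requested key matches, then echoes its value (1024 for HugePages_Total, 0 for HugePages_Surp above). A minimal standalone sketch of that pattern, assuming bash with extglob; it condenses the traced loop and is not the verbatim setup/common.sh implementation:

#!/usr/bin/env bash
# Condensed sketch of the traced get_meminfo loop (illustrative only).
# Prints the value of one meminfo key, optionally from a per-NUMA-node
# meminfo file as used for the node0 lookup above.
shopt -s extglob
get_meminfo() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo mem
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix each line with "Node N "
    local var val _
    while IFS=': ' read -r var val _; do
        # skip ("continue") until the requested key is reached
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}
# Example matching the trace: get_meminfo HugePages_Surp 0   ->  0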
02:45:23 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:32.753 02:45:23 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:32.753 02:45:23 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:32.753 02:45:23 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:32.753 02:45:23 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:32.753 02:45:23 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:32.753 02:45:23 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:32.753 02:45:23 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:32.753 02:45:23 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:32.753 02:45:23 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:32.753 02:45:23 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:32.753 02:45:23 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:32.753 02:45:23 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:32.753 02:45:23 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:32.753 00:04:32.753 real 0m26.162s 00:04:32.753 user 0m8.784s 00:04:32.753 sys 0m15.922s 00:04:32.753 02:45:23 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:32.754 02:45:23 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:32.754 ************************************ 00:04:32.754 END TEST hugepages 00:04:32.754 ************************************ 00:04:32.754 02:45:23 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:04:32.754 02:45:23 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:32.754 02:45:23 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:32.754 02:45:23 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:32.754 ************************************ 00:04:32.754 START TEST driver 00:04:32.754 ************************************ 00:04:32.754 02:45:23 setup.sh.driver -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:04:32.754 * Looking for test storage... 
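The clear_hp teardown traced above (setup/hugepages.sh@39-@45) walks every NUMA node, writes 0 into each hugepage-size pool, and exports CLEAR_HUGE=yes before the hugepages suite reports its totals. A rough standalone sketch of that pattern, assuming the standard sysfs hugepages layout; illustrative only, since the real script iterates its nodes_sys array rather than globbing sysfs, and it must run as root:

# Sketch of the clear_hp teardown: zero every per-node hugepage pool so
# the next test starts from a clean slate (illustrative, needs root).
clear_hp() {
    local node hp
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"
        done
    done
    export CLEAR_HUGE=yes
}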
00:04:32.754 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:04:32.754 02:45:23 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:32.754 02:45:23 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:32.754 02:45:23 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:38.021 02:45:28 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:38.021 02:45:28 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:38.021 02:45:28 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:38.021 02:45:28 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:38.021 ************************************ 00:04:38.021 START TEST guess_driver 00:04:38.021 ************************************ 00:04:38.021 02:45:28 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver 00:04:38.021 02:45:28 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:38.021 02:45:28 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:38.021 02:45:28 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:38.021 02:45:28 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:38.021 02:45:28 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:38.021 02:45:28 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:38.021 02:45:28 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:38.021 02:45:28 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:38.021 02:45:28 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:38.021 02:45:28 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 176 > 0 )) 00:04:38.021 02:45:28 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:38.021 02:45:28 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:38.021 02:45:28 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:38.021 02:45:28 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:38.021 02:45:28 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:38.021 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:38.021 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:38.021 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:38.021 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:38.021 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:38.021 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:38.021 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:38.021 02:45:28 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:38.021 02:45:28 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:38.021 02:45:28 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:38.021 02:45:28 
setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:38.021 02:45:28 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:38.021 Looking for driver=vfio-pci 00:04:38.021 02:45:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:38.021 02:45:28 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:38.021 02:45:28 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:38.021 02:45:28 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:40.550 02:45:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:40.550 02:45:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:40.550 02:45:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:40.550 02:45:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:40.550 02:45:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:40.550 02:45:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:40.550 02:45:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:40.550 02:45:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:40.551 02:45:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:40.551 02:45:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:40.551 02:45:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:40.551 02:45:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:40.551 02:45:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:40.551 02:45:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:40.551 02:45:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:40.551 02:45:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:40.551 02:45:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:40.551 02:45:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:40.551 02:45:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:40.551 02:45:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:40.551 02:45:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:40.809 02:45:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:40.809 02:45:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:40.809 02:45:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:40.809 02:45:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:40.809 02:45:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:40.809 02:45:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker 
setup_driver 00:04:40.809 02:45:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:40.809 02:45:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:40.809 02:45:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:40.809 02:45:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:40.809 02:45:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:40.809 02:45:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:40.809 02:45:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:40.809 02:45:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:40.809 02:45:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:40.809 02:45:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:40.809 02:45:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:40.809 02:45:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:40.809 02:45:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:40.809 02:45:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:40.809 02:45:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:40.809 02:45:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:40.809 02:45:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:40.809 02:45:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:40.809 02:45:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:40.809 02:45:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:40.809 02:45:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:42.711 02:45:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:42.711 02:45:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:42.711 02:45:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:42.711 02:45:33 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:42.711 02:45:33 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:42.711 02:45:33 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:42.711 02:45:33 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:47.978 00:04:47.978 real 0m9.773s 00:04:47.978 user 0m2.515s 00:04:47.978 sys 0m4.966s 00:04:47.978 02:45:37 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:47.978 02:45:37 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:47.978 ************************************ 00:04:47.978 END TEST guess_driver 00:04:47.978 ************************************ 00:04:47.978 00:04:47.978 real 0m14.436s 00:04:47.978 user 0m3.750s 00:04:47.978 sys 0m7.578s 00:04:47.978 02:45:37 setup.sh.driver -- common/autotest_common.sh@1122 
-- # xtrace_disable 00:04:47.978 02:45:37 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:47.978 ************************************ 00:04:47.978 END TEST driver 00:04:47.978 ************************************ 00:04:47.978 02:45:37 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:04:47.978 02:45:37 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:47.978 02:45:37 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:47.978 02:45:37 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:47.978 ************************************ 00:04:47.978 START TEST devices 00:04:47.978 ************************************ 00:04:47.978 02:45:37 setup.sh.devices -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:04:47.978 * Looking for test storage... 00:04:47.978 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:04:47.978 02:45:38 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:47.978 02:45:38 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:47.978 02:45:38 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:47.978 02:45:38 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:51.264 02:45:41 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:51.264 02:45:41 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:04:51.264 02:45:41 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:51.264 02:45:41 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf 00:04:51.264 02:45:41 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:51.264 02:45:41 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:04:51.264 02:45:41 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:51.264 02:45:41 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:51.264 02:45:41 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:51.264 02:45:41 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:51.264 02:45:41 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:51.264 02:45:41 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:51.264 02:45:41 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:51.264 02:45:41 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:51.264 02:45:41 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:51.265 02:45:41 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:51.265 02:45:41 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:51.265 02:45:41 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:d8:00.0 00:04:51.265 02:45:41 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:04:51.265 02:45:41 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:51.265 02:45:41 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:51.265 02:45:41 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py 
nvme0n1 00:04:51.265 No valid GPT data, bailing 00:04:51.265 02:45:41 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:51.265 02:45:41 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:51.265 02:45:41 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:51.265 02:45:41 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:51.265 02:45:41 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:51.265 02:45:41 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:51.265 02:45:41 setup.sh.devices -- setup/common.sh@80 -- # echo 1600321314816 00:04:51.265 02:45:41 setup.sh.devices -- setup/devices.sh@204 -- # (( 1600321314816 >= min_disk_size )) 00:04:51.265 02:45:41 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:51.265 02:45:41 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0 00:04:51.265 02:45:41 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:51.265 02:45:41 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:51.265 02:45:41 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:51.265 02:45:41 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:51.265 02:45:41 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:51.265 02:45:41 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:51.265 ************************************ 00:04:51.265 START TEST nvme_mount 00:04:51.265 ************************************ 00:04:51.265 02:45:41 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount 00:04:51.265 02:45:41 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:51.265 02:45:41 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:51.265 02:45:41 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:51.265 02:45:41 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:51.265 02:45:41 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:51.265 02:45:41 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:51.265 02:45:41 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:51.265 02:45:41 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:51.265 02:45:41 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:51.265 02:45:41 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:51.265 02:45:41 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:51.265 02:45:41 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:51.265 02:45:41 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:51.265 02:45:41 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:51.265 02:45:41 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:51.265 02:45:41 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:51.265 02:45:41 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 
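The nvme_mount setup that follows partitions the test disk: partition_drive converts the requested 1 GiB partition size into 512-byte sectors ((( size /= 512 )) above), starts the first partition at sector 2048, and passes sgdisk an inclusive end sector, which matches the --new=1:2048:2099199 issued in the next entries. A small sketch of that arithmetic; the $disk value is a placeholder, and the sketch only prints the commands instead of running them, since --zap-all and --new are destructive:

# Sector arithmetic behind the sgdisk calls issued below (condensed from
# the partition_drive trace; prints the commands rather than running them).
disk=nvme0n1                              # placeholder device name
size=1073741824                           # 1 GiB per partition, in bytes
(( size /= 512 ))                         # -> 2097152 512-byte sectors
part_start=2048                           # alignment default used by the script
(( part_end = part_start + size - 1 ))    # 2048 + 2097152 - 1 = 2099199
echo "sgdisk /dev/$disk --zap-all"
echo "flock /dev/$disk sgdisk /dev/$disk --new=1:${part_start}:${part_end}"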
00:04:51.265 02:45:41 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:51.265 02:45:41 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:52.202 Creating new GPT entries in memory. 00:04:52.202 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:52.202 other utilities. 00:04:52.202 02:45:42 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:52.202 02:45:42 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:52.202 02:45:42 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:52.202 02:45:42 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:52.202 02:45:42 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:53.138 Creating new GPT entries in memory. 00:04:53.138 The operation has completed successfully. 00:04:53.138 02:45:43 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:53.138 02:45:43 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:53.138 02:45:43 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 3462199 00:04:53.138 02:45:43 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:53.138 02:45:43 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:53.138 02:45:43 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:53.138 02:45:43 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:53.138 02:45:43 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:53.138 02:45:43 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:53.138 02:45:43 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:53.138 02:45:43 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:53.138 02:45:43 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:53.138 02:45:43 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:53.138 02:45:43 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:53.138 02:45:43 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:53.138 02:45:43 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:53.138 02:45:43 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:53.138 02:45:43 setup.sh.devices.nvme_mount -- 
setup/devices.sh@59 -- # local pci status 00:04:53.138 02:45:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.138 02:45:43 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:53.138 02:45:43 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:53.138 02:45:43 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:53.138 02:45:43 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:56.423 02:45:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:56.423 02:45:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.423 02:45:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:56.423 02:45:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.423 02:45:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:56.423 02:45:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.423 02:45:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:56.423 02:45:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.423 02:45:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:56.423 02:45:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.423 02:45:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:56.423 02:45:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.423 02:45:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:56.423 02:45:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.423 02:45:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:56.423 02:45:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.423 02:45:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:56.423 02:45:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.423 02:45:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:56.423 02:45:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.423 02:45:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:56.423 02:45:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.423 02:45:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:56.423 02:45:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.423 02:45:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:56.423 02:45:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.423 
02:45:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:56.423 02:45:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.423 02:45:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:56.423 02:45:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.423 02:45:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:56.423 02:45:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.423 02:45:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:56.423 02:45:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:56.423 02:45:46 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:56.423 02:45:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.423 02:45:46 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:56.423 02:45:46 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:56.423 02:45:46 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:56.423 02:45:46 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:56.423 02:45:46 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:56.423 02:45:46 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:56.423 02:45:46 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:56.423 02:45:46 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:56.423 02:45:46 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:56.423 02:45:46 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:56.423 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:56.423 02:45:46 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:56.423 02:45:46 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:56.423 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:56.423 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:04:56.423 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:56.423 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:56.423 02:45:47 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:56.423 02:45:47 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 
mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:56.423 02:45:47 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:56.423 02:45:47 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:56.423 02:45:47 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:56.423 02:45:47 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:56.423 02:45:47 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:56.423 02:45:47 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:56.423 02:45:47 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:56.423 02:45:47 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:56.423 02:45:47 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:56.423 02:45:47 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:56.423 02:45:47 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:56.423 02:45:47 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:56.423 02:45:47 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:56.423 02:45:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.423 02:45:47 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:56.423 02:45:47 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:56.423 02:45:47 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:56.423 02:45:47 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:59.709 02:45:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:59.709 02:45:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.709 02:45:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:59.709 02:45:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.709 02:45:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:59.709 02:45:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.709 02:45:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:59.709 02:45:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.709 02:45:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:59.709 02:45:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 
-- # read -r pci _ _ status 00:04:59.709 02:45:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:59.709 02:45:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.709 02:45:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:59.709 02:45:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.709 02:45:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:59.709 02:45:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.709 02:45:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:59.709 02:45:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.709 02:45:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:59.709 02:45:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.709 02:45:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:59.709 02:45:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.709 02:45:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:59.709 02:45:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.709 02:45:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:59.709 02:45:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.709 02:45:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:59.709 02:45:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.709 02:45:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:59.709 02:45:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.709 02:45:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:59.709 02:45:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.709 02:45:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:59.709 02:45:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:59.709 02:45:50 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:59.709 02:45:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.967 02:45:50 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:59.967 02:45:50 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:59.967 02:45:50 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:59.967 02:45:50 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:59.967 02:45:50 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:59.967 02:45:50 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:59.967 02:45:50 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' '' 00:04:59.967 02:45:50 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:59.967 02:45:50 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:59.967 02:45:50 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:59.967 02:45:50 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:59.967 02:45:50 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:59.967 02:45:50 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:59.967 02:45:50 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:59.967 02:45:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.967 02:45:50 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:59.967 02:45:50 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:59.967 02:45:50 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:59.967 02:45:50 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:05:03.248 02:45:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:03.248 02:45:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.248 02:45:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:03.248 02:45:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.248 02:45:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:03.248 02:45:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.248 02:45:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:03.248 02:45:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.248 02:45:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:03.248 02:45:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.248 02:45:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:03.248 02:45:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.248 02:45:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:03.248 02:45:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.248 02:45:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:03.248 02:45:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci 
_ _ status 00:05:03.248 02:45:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:03.248 02:45:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.248 02:45:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:03.248 02:45:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.248 02:45:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:03.248 02:45:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.248 02:45:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:03.248 02:45:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.248 02:45:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:03.248 02:45:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.248 02:45:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:03.248 02:45:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.248 02:45:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:03.248 02:45:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.248 02:45:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:03.248 02:45:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.248 02:45:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:03.248 02:45:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:03.248 02:45:53 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:03.248 02:45:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.248 02:45:53 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:03.248 02:45:53 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:03.248 02:45:53 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:03.248 02:45:53 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:03.248 02:45:53 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:03.248 02:45:53 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:03.248 02:45:53 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:03.248 02:45:53 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:03.248 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:03.248 00:05:03.248 real 0m12.211s 00:05:03.248 user 0m3.439s 00:05:03.248 sys 0m6.652s 00:05:03.248 02:45:53 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:03.248 02:45:53 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:03.248 
************************************ 00:05:03.248 END TEST nvme_mount 00:05:03.248 ************************************ 00:05:03.248 02:45:53 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:03.248 02:45:53 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:03.248 02:45:53 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:03.248 02:45:53 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:03.248 ************************************ 00:05:03.248 START TEST dm_mount 00:05:03.248 ************************************ 00:05:03.248 02:45:53 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:05:03.248 02:45:53 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:03.248 02:45:53 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:03.248 02:45:53 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:03.248 02:45:53 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:03.248 02:45:53 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:03.248 02:45:53 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:03.248 02:45:53 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:03.248 02:45:53 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:03.248 02:45:53 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:03.248 02:45:53 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:03.248 02:45:53 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:03.248 02:45:53 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:03.248 02:45:53 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:03.248 02:45:53 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:03.248 02:45:53 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:03.248 02:45:53 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:03.248 02:45:53 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:03.248 02:45:53 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:03.248 02:45:53 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:03.248 02:45:53 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:03.248 02:45:53 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:04.183 Creating new GPT entries in memory. 00:05:04.183 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:04.183 other utilities. 00:05:04.183 02:45:54 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:04.183 02:45:54 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:04.183 02:45:54 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:04.183 02:45:54 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:04.183 02:45:54 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:05.561 Creating new GPT entries in memory. 00:05:05.561 The operation has completed successfully. 00:05:05.561 02:45:56 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:05.561 02:45:56 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:05.561 02:45:56 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:05.561 02:45:56 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:05.561 02:45:56 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:06.497 The operation has completed successfully. 00:05:06.497 02:45:57 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:06.497 02:45:57 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:06.497 02:45:57 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 3466621 00:05:06.497 02:45:57 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:06.497 02:45:57 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:06.497 02:45:57 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:06.497 02:45:57 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:06.497 02:45:57 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:06.497 02:45:57 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:06.497 02:45:57 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:06.497 02:45:57 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:06.497 02:45:57 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:06.497 02:45:57 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:06.497 02:45:57 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:06.497 02:45:57 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:06.497 02:45:57 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:06.497 02:45:57 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:06.497 02:45:57 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount size= 00:05:06.497 02:45:57 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:06.497 02:45:57 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:06.497 02:45:57 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:06.497 02:45:57 setup.sh.devices.dm_mount -- 
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:06.497 02:45:57 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:06.497 02:45:57 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:06.497 02:45:57 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:06.497 02:45:57 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:06.497 02:45:57 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:06.497 02:45:57 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:06.497 02:45:57 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:06.497 02:45:57 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:06.497 02:45:57 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:06.497 02:45:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.497 02:45:57 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:06.497 02:45:57 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:06.497 02:45:57 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:06.497 02:45:57 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:05:09.029 02:45:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:09.029 02:45:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.029 02:45:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:09.029 02:45:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.029 02:45:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:09.029 02:45:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.029 02:45:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:09.029 02:45:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.029 02:45:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:09.029 02:45:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.029 02:45:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:09.029 02:45:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.029 02:45:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:09.029 02:45:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.029 02:45:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 
0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:09.029 02:45:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.029 02:45:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:09.029 02:45:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.029 02:45:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:09.029 02:45:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.029 02:45:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:09.029 02:45:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.029 02:45:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:09.029 02:45:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.029 02:45:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:09.029 02:45:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.029 02:45:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:09.029 02:45:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.029 02:45:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:09.029 02:45:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.029 02:45:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:09.029 02:45:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.029 02:45:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:09.029 02:45:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:09.029 02:45:59 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:09.029 02:45:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.287 02:45:59 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:09.287 02:45:59 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:09.287 02:45:59 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:09.287 02:45:59 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:09.287 02:45:59 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:09.287 02:45:59 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:09.287 02:45:59 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:d8:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:09.287 
02:45:59 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:09.287 02:45:59 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:09.287 02:45:59 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:09.287 02:45:59 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:09.287 02:45:59 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:09.287 02:45:59 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:09.287 02:45:59 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:09.287 02:45:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.287 02:45:59 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:09.287 02:45:59 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:09.287 02:45:59 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:09.287 02:45:59 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:05:12.571 02:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:12.571 02:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.571 02:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:12.571 02:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.571 02:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:12.571 02:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.571 02:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:12.571 02:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.571 02:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:12.571 02:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.571 02:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:12.571 02:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.571 02:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:12.571 02:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.571 02:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:12.571 02:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.571 02:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:12.571 02:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.571 02:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:12.571 02:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.571 02:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == 
\0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:12.571 02:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.571 02:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:12.571 02:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.571 02:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:12.571 02:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.571 02:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:12.571 02:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.571 02:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:12.571 02:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.571 02:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:12.571 02:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.571 02:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:12.571 02:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:12.571 02:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:12.571 02:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.571 02:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:12.571 02:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:12.571 02:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:12.571 02:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:12.571 02:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:12.571 02:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:12.571 02:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:12.571 02:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:12.571 02:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:12.830 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:12.830 02:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:12.830 02:46:03 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:12.830 00:05:12.830 real 0m9.422s 00:05:12.830 user 0m2.222s 00:05:12.830 sys 0m4.225s 00:05:12.830 02:46:03 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:12.830 02:46:03 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:12.830 ************************************ 00:05:12.830 END TEST dm_mount 00:05:12.830 ************************************ 00:05:12.830 02:46:03 setup.sh.devices -- setup/devices.sh@1 -- # 
cleanup 00:05:12.830 02:46:03 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:12.830 02:46:03 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:12.830 02:46:03 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:12.830 02:46:03 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:12.830 02:46:03 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:12.830 02:46:03 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:13.098 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:13.098 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:05:13.098 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:13.098 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:13.098 02:46:03 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:13.098 02:46:03 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:13.098 02:46:03 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:13.098 02:46:03 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:13.098 02:46:03 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:13.098 02:46:03 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:13.098 02:46:03 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:13.098 00:05:13.098 real 0m25.768s 00:05:13.098 user 0m7.064s 00:05:13.098 sys 0m13.510s 00:05:13.098 02:46:03 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:13.098 02:46:03 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:13.098 ************************************ 00:05:13.098 END TEST devices 00:05:13.098 ************************************ 00:05:13.098 00:05:13.098 real 1m30.872s 00:05:13.098 user 0m27.211s 00:05:13.098 sys 0m52.036s 00:05:13.098 02:46:03 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:13.098 02:46:03 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:13.098 ************************************ 00:05:13.098 END TEST setup.sh 00:05:13.098 ************************************ 00:05:13.098 02:46:03 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:05:15.701 Hugepages 00:05:15.701 node hugesize free / total 00:05:15.701 node0 1048576kB 0 / 0 00:05:15.701 node0 2048kB 2048 / 2048 00:05:15.701 node1 1048576kB 0 / 0 00:05:15.701 node1 2048kB 0 / 0 00:05:15.701 00:05:15.701 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:15.701 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:05:15.701 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:05:15.701 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:05:15.701 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:05:15.701 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:05:15.959 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:05:15.959 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:05:15.959 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:05:15.959 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:05:15.959 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:05:15.959 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:05:15.959 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:05:15.959 
I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:05:15.959 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:05:15.959 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:05:15.959 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:05:15.959 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:15.959 02:46:06 -- spdk/autotest.sh@130 -- # uname -s 00:05:15.959 02:46:06 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:15.959 02:46:06 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:15.959 02:46:06 -- common/autotest_common.sh@1527 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:05:19.240 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:19.240 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:19.240 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:19.240 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:19.240 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:19.240 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:19.240 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:19.240 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:19.240 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:19.240 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:19.240 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:19.240 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:19.240 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:19.240 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:19.240 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:19.240 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:20.612 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:05:20.613 02:46:11 -- common/autotest_common.sh@1528 -- # sleep 1 00:05:21.989 02:46:12 -- common/autotest_common.sh@1529 -- # bdfs=() 00:05:21.989 02:46:12 -- common/autotest_common.sh@1529 -- # local bdfs 00:05:21.989 02:46:12 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:05:21.989 02:46:12 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:05:21.989 02:46:12 -- common/autotest_common.sh@1509 -- # bdfs=() 00:05:21.989 02:46:12 -- common/autotest_common.sh@1509 -- # local bdfs 00:05:21.989 02:46:12 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:21.989 02:46:12 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:21.989 02:46:12 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:05:21.989 02:46:12 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:05:21.989 02:46:12 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:d8:00.0 00:05:21.989 02:46:12 -- common/autotest_common.sh@1532 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:05:25.272 Waiting for block devices as requested 00:05:25.272 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:25.272 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:25.272 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:25.272 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:25.530 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:25.530 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:25.530 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:25.530 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:25.789 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:25.789 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:25.789 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:26.048 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:26.048 
0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:26.048 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:26.307 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:26.307 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:26.307 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:05:26.566 02:46:17 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:05:26.566 02:46:17 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:05:26.566 02:46:17 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 00:05:26.566 02:46:17 -- common/autotest_common.sh@1498 -- # grep 0000:d8:00.0/nvme/nvme 00:05:26.566 02:46:17 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:05:26.566 02:46:17 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:05:26.566 02:46:17 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:05:26.566 02:46:17 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:05:26.566 02:46:17 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:05:26.566 02:46:17 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:05:26.566 02:46:17 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:05:26.566 02:46:17 -- common/autotest_common.sh@1541 -- # grep oacs 00:05:26.566 02:46:17 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:05:26.566 02:46:17 -- common/autotest_common.sh@1541 -- # oacs=' 0xe' 00:05:26.566 02:46:17 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:05:26.566 02:46:17 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:05:26.566 02:46:17 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:05:26.566 02:46:17 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:05:26.566 02:46:17 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:05:26.566 02:46:17 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:05:26.566 02:46:17 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:05:26.566 02:46:17 -- common/autotest_common.sh@1553 -- # continue 00:05:26.566 02:46:17 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:26.566 02:46:17 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:26.566 02:46:17 -- common/autotest_common.sh@10 -- # set +x 00:05:26.566 02:46:17 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:26.566 02:46:17 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:26.566 02:46:17 -- common/autotest_common.sh@10 -- # set +x 00:05:26.566 02:46:17 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:05:29.854 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:29.854 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:29.854 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:29.854 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:29.854 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:29.854 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:29.854 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:29.854 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:29.854 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:29.854 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:29.854 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:29.854 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:29.854 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:29.854 0000:80:04.2 (8086 2021): 
ioatdma -> vfio-pci 00:05:29.854 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:29.854 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:31.233 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:05:31.491 02:46:22 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:31.491 02:46:22 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:31.491 02:46:22 -- common/autotest_common.sh@10 -- # set +x 00:05:31.491 02:46:22 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:31.491 02:46:22 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:05:31.491 02:46:22 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:05:31.491 02:46:22 -- common/autotest_common.sh@1573 -- # bdfs=() 00:05:31.491 02:46:22 -- common/autotest_common.sh@1573 -- # local bdfs 00:05:31.491 02:46:22 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:05:31.491 02:46:22 -- common/autotest_common.sh@1509 -- # bdfs=() 00:05:31.491 02:46:22 -- common/autotest_common.sh@1509 -- # local bdfs 00:05:31.491 02:46:22 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:31.491 02:46:22 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:05:31.491 02:46:22 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:31.491 02:46:22 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:05:31.491 02:46:22 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:d8:00.0 00:05:31.491 02:46:22 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:05:31.491 02:46:22 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:05:31.491 02:46:22 -- common/autotest_common.sh@1576 -- # device=0x0a54 00:05:31.491 02:46:22 -- common/autotest_common.sh@1577 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:31.491 02:46:22 -- common/autotest_common.sh@1578 -- # bdfs+=($bdf) 00:05:31.492 02:46:22 -- common/autotest_common.sh@1582 -- # printf '%s\n' 0000:d8:00.0 00:05:31.492 02:46:22 -- common/autotest_common.sh@1588 -- # [[ -z 0000:d8:00.0 ]] 00:05:31.492 02:46:22 -- common/autotest_common.sh@1593 -- # spdk_tgt_pid=3475894 00:05:31.492 02:46:22 -- common/autotest_common.sh@1594 -- # waitforlisten 3475894 00:05:31.492 02:46:22 -- common/autotest_common.sh@827 -- # '[' -z 3475894 ']' 00:05:31.492 02:46:22 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.492 02:46:22 -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:31.492 02:46:22 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.492 02:46:22 -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:31.492 02:46:22 -- common/autotest_common.sh@10 -- # set +x 00:05:31.492 02:46:22 -- common/autotest_common.sh@1592 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:31.492 [2024-05-13 02:46:22.276282] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 
00:05:31.492 [2024-05-13 02:46:22.276348] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3475894 ] 00:05:31.750 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.750 [2024-05-13 02:46:22.312639] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:31.750 [2024-05-13 02:46:22.344385] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.750 [2024-05-13 02:46:22.383637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.009 02:46:22 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:32.009 02:46:22 -- common/autotest_common.sh@860 -- # return 0 00:05:32.009 02:46:22 -- common/autotest_common.sh@1596 -- # bdf_id=0 00:05:32.009 02:46:22 -- common/autotest_common.sh@1597 -- # for bdf in "${bdfs[@]}" 00:05:32.009 02:46:22 -- common/autotest_common.sh@1598 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:05:35.293 nvme0n1 00:05:35.293 02:46:25 -- common/autotest_common.sh@1600 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:35.293 [2024-05-13 02:46:25.714886] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:35.293 request: 00:05:35.293 { 00:05:35.293 "nvme_ctrlr_name": "nvme0", 00:05:35.293 "password": "test", 00:05:35.293 "method": "bdev_nvme_opal_revert", 00:05:35.293 "req_id": 1 00:05:35.293 } 00:05:35.293 Got JSON-RPC error response 00:05:35.293 response: 00:05:35.293 { 00:05:35.293 "code": -32602, 00:05:35.293 "message": "Invalid parameters" 00:05:35.293 } 00:05:35.293 02:46:25 -- common/autotest_common.sh@1600 -- # true 00:05:35.293 02:46:25 -- common/autotest_common.sh@1601 -- # (( ++bdf_id )) 00:05:35.293 02:46:25 -- common/autotest_common.sh@1604 -- # killprocess 3475894 00:05:35.293 02:46:25 -- common/autotest_common.sh@946 -- # '[' -z 3475894 ']' 00:05:35.293 02:46:25 -- common/autotest_common.sh@950 -- # kill -0 3475894 00:05:35.293 02:46:25 -- common/autotest_common.sh@951 -- # uname 00:05:35.293 02:46:25 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:35.293 02:46:25 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3475894 00:05:35.293 02:46:25 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:35.293 02:46:25 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:35.293 02:46:25 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3475894' 00:05:35.293 killing process with pid 3475894 00:05:35.293 02:46:25 -- common/autotest_common.sh@965 -- # kill 3475894 00:05:35.293 02:46:25 -- common/autotest_common.sh@970 -- # wait 3475894 00:05:37.195 02:46:27 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:37.195 02:46:27 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:37.195 02:46:27 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:37.195 02:46:27 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:37.195 02:46:27 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:37.195 02:46:27 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:37.195 02:46:27 -- common/autotest_common.sh@10 -- # set +x 00:05:37.195 02:46:27 -- spdk/autotest.sh@164 -- # run_test env 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:05:37.195 02:46:27 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:37.195 02:46:27 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:37.195 02:46:27 -- common/autotest_common.sh@10 -- # set +x 00:05:37.195 ************************************ 00:05:37.195 START TEST env 00:05:37.195 ************************************ 00:05:37.195 02:46:27 env -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:05:37.454 * Looking for test storage... 00:05:37.454 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env 00:05:37.454 02:46:28 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:05:37.454 02:46:28 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:37.454 02:46:28 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:37.454 02:46:28 env -- common/autotest_common.sh@10 -- # set +x 00:05:37.454 ************************************ 00:05:37.454 START TEST env_memory 00:05:37.454 ************************************ 00:05:37.454 02:46:28 env.env_memory -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:05:37.454 00:05:37.454 00:05:37.454 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.454 http://cunit.sourceforge.net/ 00:05:37.454 00:05:37.454 00:05:37.454 Suite: memory 00:05:37.454 Test: alloc and free memory map ...[2024-05-13 02:46:28.148042] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:37.454 passed 00:05:37.454 Test: mem map translation ...[2024-05-13 02:46:28.160597] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:37.454 [2024-05-13 02:46:28.160612] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:37.454 [2024-05-13 02:46:28.160659] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:37.454 [2024-05-13 02:46:28.160669] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:37.454 passed 00:05:37.454 Test: mem map registration ...[2024-05-13 02:46:28.180950] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:37.454 [2024-05-13 02:46:28.180966] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:37.454 passed 00:05:37.454 Test: mem map adjacent registrations ...passed 00:05:37.454 00:05:37.454 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.454 suites 1 1 n/a 0 0 00:05:37.454 tests 4 4 4 0 0 00:05:37.454 asserts 152 152 152 0 n/a 00:05:37.454 00:05:37.454 Elapsed time = 0.084 seconds 00:05:37.454 00:05:37.454 real 0m0.096s 00:05:37.454 user 0m0.080s 00:05:37.454 sys 0m0.016s 
00:05:37.454 02:46:28 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:37.454 02:46:28 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:37.454 ************************************ 00:05:37.454 END TEST env_memory 00:05:37.454 ************************************ 00:05:37.454 02:46:28 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:37.454 02:46:28 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:37.454 02:46:28 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:37.454 02:46:28 env -- common/autotest_common.sh@10 -- # set +x 00:05:37.714 ************************************ 00:05:37.714 START TEST env_vtophys 00:05:37.714 ************************************ 00:05:37.714 02:46:28 env.env_vtophys -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:37.714 EAL: lib.eal log level changed from notice to debug 00:05:37.714 EAL: Detected lcore 0 as core 0 on socket 0 00:05:37.714 EAL: Detected lcore 1 as core 1 on socket 0 00:05:37.714 EAL: Detected lcore 2 as core 2 on socket 0 00:05:37.714 EAL: Detected lcore 3 as core 3 on socket 0 00:05:37.714 EAL: Detected lcore 4 as core 4 on socket 0 00:05:37.714 EAL: Detected lcore 5 as core 5 on socket 0 00:05:37.714 EAL: Detected lcore 6 as core 6 on socket 0 00:05:37.714 EAL: Detected lcore 7 as core 8 on socket 0 00:05:37.714 EAL: Detected lcore 8 as core 9 on socket 0 00:05:37.714 EAL: Detected lcore 9 as core 10 on socket 0 00:05:37.714 EAL: Detected lcore 10 as core 11 on socket 0 00:05:37.714 EAL: Detected lcore 11 as core 12 on socket 0 00:05:37.714 EAL: Detected lcore 12 as core 13 on socket 0 00:05:37.714 EAL: Detected lcore 13 as core 14 on socket 0 00:05:37.714 EAL: Detected lcore 14 as core 16 on socket 0 00:05:37.714 EAL: Detected lcore 15 as core 17 on socket 0 00:05:37.714 EAL: Detected lcore 16 as core 18 on socket 0 00:05:37.714 EAL: Detected lcore 17 as core 19 on socket 0 00:05:37.714 EAL: Detected lcore 18 as core 20 on socket 0 00:05:37.714 EAL: Detected lcore 19 as core 21 on socket 0 00:05:37.714 EAL: Detected lcore 20 as core 22 on socket 0 00:05:37.714 EAL: Detected lcore 21 as core 24 on socket 0 00:05:37.714 EAL: Detected lcore 22 as core 25 on socket 0 00:05:37.714 EAL: Detected lcore 23 as core 26 on socket 0 00:05:37.714 EAL: Detected lcore 24 as core 27 on socket 0 00:05:37.714 EAL: Detected lcore 25 as core 28 on socket 0 00:05:37.714 EAL: Detected lcore 26 as core 29 on socket 0 00:05:37.714 EAL: Detected lcore 27 as core 30 on socket 0 00:05:37.714 EAL: Detected lcore 28 as core 0 on socket 1 00:05:37.714 EAL: Detected lcore 29 as core 1 on socket 1 00:05:37.714 EAL: Detected lcore 30 as core 2 on socket 1 00:05:37.714 EAL: Detected lcore 31 as core 3 on socket 1 00:05:37.714 EAL: Detected lcore 32 as core 4 on socket 1 00:05:37.714 EAL: Detected lcore 33 as core 5 on socket 1 00:05:37.714 EAL: Detected lcore 34 as core 6 on socket 1 00:05:37.714 EAL: Detected lcore 35 as core 8 on socket 1 00:05:37.714 EAL: Detected lcore 36 as core 9 on socket 1 00:05:37.714 EAL: Detected lcore 37 as core 10 on socket 1 00:05:37.714 EAL: Detected lcore 38 as core 11 on socket 1 00:05:37.714 EAL: Detected lcore 39 as core 12 on socket 1 00:05:37.714 EAL: Detected lcore 40 as core 13 on socket 1 00:05:37.714 EAL: Detected lcore 41 as core 14 on socket 1 00:05:37.714 EAL: Detected lcore 42 as core 16 on socket 1 
00:05:37.714 EAL: Detected lcore 43 as core 17 on socket 1 00:05:37.714 EAL: Detected lcore 44 as core 18 on socket 1 00:05:37.714 EAL: Detected lcore 45 as core 19 on socket 1 00:05:37.714 EAL: Detected lcore 46 as core 20 on socket 1 00:05:37.714 EAL: Detected lcore 47 as core 21 on socket 1 00:05:37.714 EAL: Detected lcore 48 as core 22 on socket 1 00:05:37.714 EAL: Detected lcore 49 as core 24 on socket 1 00:05:37.714 EAL: Detected lcore 50 as core 25 on socket 1 00:05:37.714 EAL: Detected lcore 51 as core 26 on socket 1 00:05:37.714 EAL: Detected lcore 52 as core 27 on socket 1 00:05:37.714 EAL: Detected lcore 53 as core 28 on socket 1 00:05:37.714 EAL: Detected lcore 54 as core 29 on socket 1 00:05:37.714 EAL: Detected lcore 55 as core 30 on socket 1 00:05:37.714 EAL: Detected lcore 56 as core 0 on socket 0 00:05:37.714 EAL: Detected lcore 57 as core 1 on socket 0 00:05:37.714 EAL: Detected lcore 58 as core 2 on socket 0 00:05:37.714 EAL: Detected lcore 59 as core 3 on socket 0 00:05:37.714 EAL: Detected lcore 60 as core 4 on socket 0 00:05:37.714 EAL: Detected lcore 61 as core 5 on socket 0 00:05:37.714 EAL: Detected lcore 62 as core 6 on socket 0 00:05:37.714 EAL: Detected lcore 63 as core 8 on socket 0 00:05:37.714 EAL: Detected lcore 64 as core 9 on socket 0 00:05:37.714 EAL: Detected lcore 65 as core 10 on socket 0 00:05:37.714 EAL: Detected lcore 66 as core 11 on socket 0 00:05:37.714 EAL: Detected lcore 67 as core 12 on socket 0 00:05:37.714 EAL: Detected lcore 68 as core 13 on socket 0 00:05:37.714 EAL: Detected lcore 69 as core 14 on socket 0 00:05:37.714 EAL: Detected lcore 70 as core 16 on socket 0 00:05:37.714 EAL: Detected lcore 71 as core 17 on socket 0 00:05:37.714 EAL: Detected lcore 72 as core 18 on socket 0 00:05:37.714 EAL: Detected lcore 73 as core 19 on socket 0 00:05:37.714 EAL: Detected lcore 74 as core 20 on socket 0 00:05:37.714 EAL: Detected lcore 75 as core 21 on socket 0 00:05:37.714 EAL: Detected lcore 76 as core 22 on socket 0 00:05:37.714 EAL: Detected lcore 77 as core 24 on socket 0 00:05:37.714 EAL: Detected lcore 78 as core 25 on socket 0 00:05:37.714 EAL: Detected lcore 79 as core 26 on socket 0 00:05:37.714 EAL: Detected lcore 80 as core 27 on socket 0 00:05:37.714 EAL: Detected lcore 81 as core 28 on socket 0 00:05:37.714 EAL: Detected lcore 82 as core 29 on socket 0 00:05:37.714 EAL: Detected lcore 83 as core 30 on socket 0 00:05:37.714 EAL: Detected lcore 84 as core 0 on socket 1 00:05:37.714 EAL: Detected lcore 85 as core 1 on socket 1 00:05:37.714 EAL: Detected lcore 86 as core 2 on socket 1 00:05:37.714 EAL: Detected lcore 87 as core 3 on socket 1 00:05:37.714 EAL: Detected lcore 88 as core 4 on socket 1 00:05:37.714 EAL: Detected lcore 89 as core 5 on socket 1 00:05:37.714 EAL: Detected lcore 90 as core 6 on socket 1 00:05:37.714 EAL: Detected lcore 91 as core 8 on socket 1 00:05:37.714 EAL: Detected lcore 92 as core 9 on socket 1 00:05:37.714 EAL: Detected lcore 93 as core 10 on socket 1 00:05:37.714 EAL: Detected lcore 94 as core 11 on socket 1 00:05:37.714 EAL: Detected lcore 95 as core 12 on socket 1 00:05:37.715 EAL: Detected lcore 96 as core 13 on socket 1 00:05:37.715 EAL: Detected lcore 97 as core 14 on socket 1 00:05:37.715 EAL: Detected lcore 98 as core 16 on socket 1 00:05:37.715 EAL: Detected lcore 99 as core 17 on socket 1 00:05:37.715 EAL: Detected lcore 100 as core 18 on socket 1 00:05:37.715 EAL: Detected lcore 101 as core 19 on socket 1 00:05:37.715 EAL: Detected lcore 102 as core 20 on socket 1 00:05:37.715 EAL: Detected 
lcore 103 as core 21 on socket 1 00:05:37.715 EAL: Detected lcore 104 as core 22 on socket 1 00:05:37.715 EAL: Detected lcore 105 as core 24 on socket 1 00:05:37.715 EAL: Detected lcore 106 as core 25 on socket 1 00:05:37.715 EAL: Detected lcore 107 as core 26 on socket 1 00:05:37.715 EAL: Detected lcore 108 as core 27 on socket 1 00:05:37.715 EAL: Detected lcore 109 as core 28 on socket 1 00:05:37.715 EAL: Detected lcore 110 as core 29 on socket 1 00:05:37.715 EAL: Detected lcore 111 as core 30 on socket 1 00:05:37.715 EAL: Maximum logical cores by configuration: 128 00:05:37.715 EAL: Detected CPU lcores: 112 00:05:37.715 EAL: Detected NUMA nodes: 2 00:05:37.715 EAL: Checking presence of .so 'librte_eal.so.24.2' 00:05:37.715 EAL: Checking presence of .so 'librte_eal.so.24' 00:05:37.715 EAL: Checking presence of .so 'librte_eal.so' 00:05:37.715 EAL: Detected static linkage of DPDK 00:05:37.715 EAL: No shared files mode enabled, IPC will be disabled 00:05:37.715 EAL: Bus pci wants IOVA as 'DC' 00:05:37.715 EAL: Buses did not request a specific IOVA mode. 00:05:37.715 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:37.715 EAL: Selected IOVA mode 'VA' 00:05:37.715 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.715 EAL: Probing VFIO support... 00:05:37.715 EAL: IOMMU type 1 (Type 1) is supported 00:05:37.715 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:37.715 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:37.715 EAL: VFIO support initialized 00:05:37.715 EAL: Ask a virtual area of 0x2e000 bytes 00:05:37.715 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:37.715 EAL: Setting up physically contiguous memory... 00:05:37.715 EAL: Setting maximum number of open files to 524288 00:05:37.715 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:37.715 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:37.715 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:37.715 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.715 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:37.715 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:37.715 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.715 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:37.715 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:37.715 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.715 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:37.715 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:37.715 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.715 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:37.715 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:37.715 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.715 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:37.715 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:37.715 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.715 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:37.715 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:37.715 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.715 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:37.715 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:37.715 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.715 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:37.715 
EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:37.715 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:37.715 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.715 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:37.715 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:37.715 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.715 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:37.715 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:37.715 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.715 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:37.715 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:37.715 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.715 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:37.715 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:37.715 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.715 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:37.715 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:37.715 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.715 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:37.715 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:37.715 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.715 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:37.715 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:37.715 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.715 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:37.715 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:37.715 EAL: Hugepages will be freed exactly as allocated. 00:05:37.715 EAL: No shared files mode enabled, IPC is disabled 00:05:37.715 EAL: No shared files mode enabled, IPC is disabled 00:05:37.715 EAL: TSC frequency is ~2500000 KHz 00:05:37.715 EAL: Main lcore 0 is ready (tid=7f2b3d2dea00;cpuset=[0]) 00:05:37.715 EAL: Trying to obtain current memory policy. 00:05:37.715 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.715 EAL: Restoring previous memory policy: 0 00:05:37.715 EAL: request: mp_malloc_sync 00:05:37.715 EAL: No shared files mode enabled, IPC is disabled 00:05:37.715 EAL: Heap on socket 0 was expanded by 2MB 00:05:37.715 EAL: No shared files mode enabled, IPC is disabled 00:05:37.715 EAL: Mem event callback 'spdk:(nil)' registered 00:05:37.715 00:05:37.715 00:05:37.715 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.715 http://cunit.sourceforge.net/ 00:05:37.715 00:05:37.715 00:05:37.715 Suite: components_suite 00:05:37.715 Test: vtophys_malloc_test ...passed 00:05:37.715 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:37.715 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.715 EAL: Restoring previous memory policy: 4 00:05:37.715 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.715 EAL: request: mp_malloc_sync 00:05:37.715 EAL: No shared files mode enabled, IPC is disabled 00:05:37.715 EAL: Heap on socket 0 was expanded by 4MB 00:05:37.715 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.715 EAL: request: mp_malloc_sync 00:05:37.715 EAL: No shared files mode enabled, IPC is disabled 00:05:37.715 EAL: Heap on socket 0 was shrunk by 4MB 00:05:37.715 EAL: Trying to obtain current memory policy. 
00:05:37.715 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.715 EAL: Restoring previous memory policy: 4 00:05:37.715 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.715 EAL: request: mp_malloc_sync 00:05:37.715 EAL: No shared files mode enabled, IPC is disabled 00:05:37.715 EAL: Heap on socket 0 was expanded by 6MB 00:05:37.715 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.715 EAL: request: mp_malloc_sync 00:05:37.715 EAL: No shared files mode enabled, IPC is disabled 00:05:37.715 EAL: Heap on socket 0 was shrunk by 6MB 00:05:37.715 EAL: Trying to obtain current memory policy. 00:05:37.715 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.715 EAL: Restoring previous memory policy: 4 00:05:37.715 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.715 EAL: request: mp_malloc_sync 00:05:37.715 EAL: No shared files mode enabled, IPC is disabled 00:05:37.715 EAL: Heap on socket 0 was expanded by 10MB 00:05:37.715 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.715 EAL: request: mp_malloc_sync 00:05:37.715 EAL: No shared files mode enabled, IPC is disabled 00:05:37.715 EAL: Heap on socket 0 was shrunk by 10MB 00:05:37.715 EAL: Trying to obtain current memory policy. 00:05:37.715 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.715 EAL: Restoring previous memory policy: 4 00:05:37.715 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.715 EAL: request: mp_malloc_sync 00:05:37.715 EAL: No shared files mode enabled, IPC is disabled 00:05:37.715 EAL: Heap on socket 0 was expanded by 18MB 00:05:37.715 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.715 EAL: request: mp_malloc_sync 00:05:37.715 EAL: No shared files mode enabled, IPC is disabled 00:05:37.715 EAL: Heap on socket 0 was shrunk by 18MB 00:05:37.715 EAL: Trying to obtain current memory policy. 00:05:37.715 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.715 EAL: Restoring previous memory policy: 4 00:05:37.715 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.715 EAL: request: mp_malloc_sync 00:05:37.715 EAL: No shared files mode enabled, IPC is disabled 00:05:37.715 EAL: Heap on socket 0 was expanded by 34MB 00:05:37.715 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.715 EAL: request: mp_malloc_sync 00:05:37.715 EAL: No shared files mode enabled, IPC is disabled 00:05:37.715 EAL: Heap on socket 0 was shrunk by 34MB 00:05:37.715 EAL: Trying to obtain current memory policy. 00:05:37.715 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.715 EAL: Restoring previous memory policy: 4 00:05:37.715 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.715 EAL: request: mp_malloc_sync 00:05:37.715 EAL: No shared files mode enabled, IPC is disabled 00:05:37.715 EAL: Heap on socket 0 was expanded by 66MB 00:05:37.715 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.715 EAL: request: mp_malloc_sync 00:05:37.715 EAL: No shared files mode enabled, IPC is disabled 00:05:37.715 EAL: Heap on socket 0 was shrunk by 66MB 00:05:37.715 EAL: Trying to obtain current memory policy. 
00:05:37.715 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.715 EAL: Restoring previous memory policy: 4 00:05:37.715 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.715 EAL: request: mp_malloc_sync 00:05:37.715 EAL: No shared files mode enabled, IPC is disabled 00:05:37.715 EAL: Heap on socket 0 was expanded by 130MB 00:05:37.715 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.715 EAL: request: mp_malloc_sync 00:05:37.715 EAL: No shared files mode enabled, IPC is disabled 00:05:37.716 EAL: Heap on socket 0 was shrunk by 130MB 00:05:37.716 EAL: Trying to obtain current memory policy. 00:05:37.716 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.974 EAL: Restoring previous memory policy: 4 00:05:37.974 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.974 EAL: request: mp_malloc_sync 00:05:37.974 EAL: No shared files mode enabled, IPC is disabled 00:05:37.974 EAL: Heap on socket 0 was expanded by 258MB 00:05:37.974 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.974 EAL: request: mp_malloc_sync 00:05:37.974 EAL: No shared files mode enabled, IPC is disabled 00:05:37.974 EAL: Heap on socket 0 was shrunk by 258MB 00:05:37.974 EAL: Trying to obtain current memory policy. 00:05:37.974 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.974 EAL: Restoring previous memory policy: 4 00:05:37.974 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.974 EAL: request: mp_malloc_sync 00:05:37.974 EAL: No shared files mode enabled, IPC is disabled 00:05:37.974 EAL: Heap on socket 0 was expanded by 514MB 00:05:38.233 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.233 EAL: request: mp_malloc_sync 00:05:38.233 EAL: No shared files mode enabled, IPC is disabled 00:05:38.233 EAL: Heap on socket 0 was shrunk by 514MB 00:05:38.233 EAL: Trying to obtain current memory policy. 
00:05:38.233 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.492 EAL: Restoring previous memory policy: 4 00:05:38.492 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.492 EAL: request: mp_malloc_sync 00:05:38.492 EAL: No shared files mode enabled, IPC is disabled 00:05:38.492 EAL: Heap on socket 0 was expanded by 1026MB 00:05:38.492 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.751 EAL: request: mp_malloc_sync 00:05:38.751 EAL: No shared files mode enabled, IPC is disabled 00:05:38.751 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:38.751 passed 00:05:38.751 00:05:38.751 Run Summary: Type Total Ran Passed Failed Inactive 00:05:38.751 suites 1 1 n/a 0 0 00:05:38.751 tests 2 2 2 0 0 00:05:38.751 asserts 497 497 497 0 n/a 00:05:38.751 00:05:38.751 Elapsed time = 0.960 seconds 00:05:38.751 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.751 EAL: request: mp_malloc_sync 00:05:38.751 EAL: No shared files mode enabled, IPC is disabled 00:05:38.751 EAL: Heap on socket 0 was shrunk by 2MB 00:05:38.751 EAL: No shared files mode enabled, IPC is disabled 00:05:38.751 EAL: No shared files mode enabled, IPC is disabled 00:05:38.751 EAL: No shared files mode enabled, IPC is disabled 00:05:38.751 00:05:38.751 real 0m1.079s 00:05:38.751 user 0m0.619s 00:05:38.751 sys 0m0.434s 00:05:38.751 02:46:29 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:38.751 02:46:29 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:38.751 ************************************ 00:05:38.751 END TEST env_vtophys 00:05:38.751 ************************************ 00:05:38.751 02:46:29 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:05:38.751 02:46:29 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:38.751 02:46:29 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:38.751 02:46:29 env -- common/autotest_common.sh@10 -- # set +x 00:05:38.751 ************************************ 00:05:38.751 START TEST env_pci 00:05:38.751 ************************************ 00:05:38.751 02:46:29 env.env_pci -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:05:38.751 00:05:38.751 00:05:38.751 CUnit - A unit testing framework for C - Version 2.1-3 00:05:38.751 http://cunit.sourceforge.net/ 00:05:38.751 00:05:38.751 00:05:38.751 Suite: pci 00:05:38.751 Test: pci_hook ...[2024-05-13 02:46:29.485514] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/pci.c:1041:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3477184 has claimed it 00:05:38.751 EAL: Cannot find device (10000:00:01.0) 00:05:38.751 EAL: Failed to attach device on primary process 00:05:38.751 passed 00:05:38.751 00:05:38.751 Run Summary: Type Total Ran Passed Failed Inactive 00:05:38.751 suites 1 1 n/a 0 0 00:05:38.751 tests 1 1 1 0 0 00:05:38.751 asserts 25 25 25 0 n/a 00:05:38.751 00:05:38.751 Elapsed time = 0.034 seconds 00:05:38.751 00:05:38.752 real 0m0.053s 00:05:38.752 user 0m0.010s 00:05:38.752 sys 0m0.043s 00:05:38.752 02:46:29 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:38.752 02:46:29 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:38.752 ************************************ 00:05:38.752 END TEST env_pci 00:05:38.752 ************************************ 00:05:39.011 02:46:29 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:39.011 
02:46:29 env -- env/env.sh@15 -- # uname 00:05:39.011 02:46:29 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:39.011 02:46:29 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:39.011 02:46:29 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:39.011 02:46:29 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:05:39.011 02:46:29 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:39.011 02:46:29 env -- common/autotest_common.sh@10 -- # set +x 00:05:39.011 ************************************ 00:05:39.011 START TEST env_dpdk_post_init 00:05:39.011 ************************************ 00:05:39.011 02:46:29 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:39.011 EAL: Detected CPU lcores: 112 00:05:39.011 EAL: Detected NUMA nodes: 2 00:05:39.011 EAL: Detected static linkage of DPDK 00:05:39.011 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:39.011 EAL: Selected IOVA mode 'VA' 00:05:39.011 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.011 EAL: VFIO support initialized 00:05:39.011 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:39.011 EAL: Using IOMMU type 1 (Type 1) 00:05:39.948 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:05:43.234 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:05:43.234 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001000000 00:05:43.800 Starting DPDK initialization... 00:05:43.800 Starting SPDK post initialization... 00:05:43.800 SPDK NVMe probe 00:05:43.800 Attaching to 0000:d8:00.0 00:05:43.800 Attached to 0000:d8:00.0 00:05:43.800 Cleaning up... 
00:05:43.800 00:05:43.800 real 0m4.754s 00:05:43.800 user 0m3.535s 00:05:43.800 sys 0m0.467s 00:05:43.800 02:46:34 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:43.800 02:46:34 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:43.800 ************************************ 00:05:43.800 END TEST env_dpdk_post_init 00:05:43.800 ************************************ 00:05:43.800 02:46:34 env -- env/env.sh@26 -- # uname 00:05:43.800 02:46:34 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:43.800 02:46:34 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:43.800 02:46:34 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:43.800 02:46:34 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:43.800 02:46:34 env -- common/autotest_common.sh@10 -- # set +x 00:05:43.800 ************************************ 00:05:43.800 START TEST env_mem_callbacks 00:05:43.800 ************************************ 00:05:43.801 02:46:34 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:43.801 EAL: Detected CPU lcores: 112 00:05:43.801 EAL: Detected NUMA nodes: 2 00:05:43.801 EAL: Detected static linkage of DPDK 00:05:43.801 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:43.801 EAL: Selected IOVA mode 'VA' 00:05:43.801 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.801 EAL: VFIO support initialized 00:05:43.801 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:43.801 00:05:43.801 00:05:43.801 CUnit - A unit testing framework for C - Version 2.1-3 00:05:43.801 http://cunit.sourceforge.net/ 00:05:43.801 00:05:43.801 00:05:43.801 Suite: memory 00:05:43.801 Test: test ... 
00:05:43.801 register 0x200000200000 2097152 00:05:43.801 malloc 3145728 00:05:43.801 register 0x200000400000 4194304 00:05:43.801 buf 0x200000500000 len 3145728 PASSED 00:05:43.801 malloc 64 00:05:43.801 buf 0x2000004fff40 len 64 PASSED 00:05:43.801 malloc 4194304 00:05:43.801 register 0x200000800000 6291456 00:05:43.801 buf 0x200000a00000 len 4194304 PASSED 00:05:43.801 free 0x200000500000 3145728 00:05:43.801 free 0x2000004fff40 64 00:05:43.801 unregister 0x200000400000 4194304 PASSED 00:05:43.801 free 0x200000a00000 4194304 00:05:43.801 unregister 0x200000800000 6291456 PASSED 00:05:43.801 malloc 8388608 00:05:43.801 register 0x200000400000 10485760 00:05:43.801 buf 0x200000600000 len 8388608 PASSED 00:05:43.801 free 0x200000600000 8388608 00:05:43.801 unregister 0x200000400000 10485760 PASSED 00:05:43.801 passed 00:05:43.801 00:05:43.801 Run Summary: Type Total Ran Passed Failed Inactive 00:05:43.801 suites 1 1 n/a 0 0 00:05:43.801 tests 1 1 1 0 0 00:05:43.801 asserts 15 15 15 0 n/a 00:05:43.801 00:05:43.801 Elapsed time = 0.005 seconds 00:05:43.801 00:05:43.801 real 0m0.065s 00:05:43.801 user 0m0.014s 00:05:43.801 sys 0m0.051s 00:05:43.801 02:46:34 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:43.801 02:46:34 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:43.801 ************************************ 00:05:43.801 END TEST env_mem_callbacks 00:05:43.801 ************************************ 00:05:43.801 00:05:43.801 real 0m6.615s 00:05:43.801 user 0m4.457s 00:05:43.801 sys 0m1.398s 00:05:43.801 02:46:34 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:43.801 02:46:34 env -- common/autotest_common.sh@10 -- # set +x 00:05:43.801 ************************************ 00:05:43.801 END TEST env 00:05:43.801 ************************************ 00:05:44.059 02:46:34 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:05:44.059 02:46:34 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:44.059 02:46:34 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:44.059 02:46:34 -- common/autotest_common.sh@10 -- # set +x 00:05:44.059 ************************************ 00:05:44.059 START TEST rpc 00:05:44.059 ************************************ 00:05:44.059 02:46:34 rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:05:44.059 * Looking for test storage... 00:05:44.059 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:44.059 02:46:34 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3478339 00:05:44.059 02:46:34 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:44.059 02:46:34 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:44.059 02:46:34 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3478339 00:05:44.059 02:46:34 rpc -- common/autotest_common.sh@827 -- # '[' -z 3478339 ']' 00:05:44.060 02:46:34 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.060 02:46:34 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:44.060 02:46:34 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
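The register/malloc/buf/free/unregister lines above come from the env mem_callbacks CUnit binary walking buffers through the SPDK memory hooks, and every binary in the env suite is launched the same way by the run_test helper. As a rough sketch, the two standalone binaries exercised above can be invoked by hand with nothing more than the paths and flags already recorded in this log (the workspace root is specific to this CI node, so treat it as a placeholder for a local checkout):

    # Illustrative only: paths and flags copied from the log above; assumes root and configured hugepages.
    SPDK_ROOT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    $SPDK_ROOT/test/env/mem_callbacks/mem_callbacks
    $SPDK_ROOT/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000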
00:05:44.060 02:46:34 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:44.060 02:46:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.060 [2024-05-13 02:46:34.797623] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:05:44.060 [2024-05-13 02:46:34.797702] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3478339 ] 00:05:44.060 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.060 [2024-05-13 02:46:34.834993] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:44.318 [2024-05-13 02:46:34.866458] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.318 [2024-05-13 02:46:34.905975] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:44.318 [2024-05-13 02:46:34.906018] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3478339' to capture a snapshot of events at runtime. 00:05:44.319 [2024-05-13 02:46:34.906028] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:44.319 [2024-05-13 02:46:34.906037] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:44.319 [2024-05-13 02:46:34.906044] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3478339 for offline analysis/debug. 00:05:44.319 [2024-05-13 02:46:34.906073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.319 02:46:35 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:44.319 02:46:35 rpc -- common/autotest_common.sh@860 -- # return 0 00:05:44.319 02:46:35 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:44.319 02:46:35 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:44.319 02:46:35 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:44.319 02:46:35 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:44.319 02:46:35 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:44.319 02:46:35 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:44.319 02:46:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.319 ************************************ 00:05:44.319 START TEST rpc_integrity 00:05:44.319 ************************************ 00:05:44.319 02:46:35 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:44.577 02:46:35 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:44.577 02:46:35 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.577 02:46:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:44.577 02:46:35 rpc.rpc_integrity -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:44.577 02:46:35 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:44.577 02:46:35 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:44.577 02:46:35 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:44.577 02:46:35 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:44.577 02:46:35 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.577 02:46:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:44.577 02:46:35 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:44.577 02:46:35 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:44.577 02:46:35 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:44.577 02:46:35 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.578 02:46:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:44.578 02:46:35 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:44.578 02:46:35 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:44.578 { 00:05:44.578 "name": "Malloc0", 00:05:44.578 "aliases": [ 00:05:44.578 "bc597569-e3b8-4f20-9ab9-1935cca6be4b" 00:05:44.578 ], 00:05:44.578 "product_name": "Malloc disk", 00:05:44.578 "block_size": 512, 00:05:44.578 "num_blocks": 16384, 00:05:44.578 "uuid": "bc597569-e3b8-4f20-9ab9-1935cca6be4b", 00:05:44.578 "assigned_rate_limits": { 00:05:44.578 "rw_ios_per_sec": 0, 00:05:44.578 "rw_mbytes_per_sec": 0, 00:05:44.578 "r_mbytes_per_sec": 0, 00:05:44.578 "w_mbytes_per_sec": 0 00:05:44.578 }, 00:05:44.578 "claimed": false, 00:05:44.578 "zoned": false, 00:05:44.578 "supported_io_types": { 00:05:44.578 "read": true, 00:05:44.578 "write": true, 00:05:44.578 "unmap": true, 00:05:44.578 "write_zeroes": true, 00:05:44.578 "flush": true, 00:05:44.578 "reset": true, 00:05:44.578 "compare": false, 00:05:44.578 "compare_and_write": false, 00:05:44.578 "abort": true, 00:05:44.578 "nvme_admin": false, 00:05:44.578 "nvme_io": false 00:05:44.578 }, 00:05:44.578 "memory_domains": [ 00:05:44.578 { 00:05:44.578 "dma_device_id": "system", 00:05:44.578 "dma_device_type": 1 00:05:44.578 }, 00:05:44.578 { 00:05:44.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:44.578 "dma_device_type": 2 00:05:44.578 } 00:05:44.578 ], 00:05:44.578 "driver_specific": {} 00:05:44.578 } 00:05:44.578 ]' 00:05:44.578 02:46:35 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:44.578 02:46:35 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:44.578 02:46:35 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:44.578 02:46:35 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.578 02:46:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:44.578 [2024-05-13 02:46:35.250469] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:44.578 [2024-05-13 02:46:35.250503] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:44.578 [2024-05-13 02:46:35.250519] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x5d1e8a0 00:05:44.578 [2024-05-13 02:46:35.250528] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:44.578 [2024-05-13 02:46:35.251347] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:44.578 [2024-05-13 02:46:35.251372] vbdev_passthru.c: 
705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:44.578 Passthru0 00:05:44.578 02:46:35 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:44.578 02:46:35 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:44.578 02:46:35 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.578 02:46:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:44.578 02:46:35 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:44.578 02:46:35 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:44.578 { 00:05:44.578 "name": "Malloc0", 00:05:44.578 "aliases": [ 00:05:44.578 "bc597569-e3b8-4f20-9ab9-1935cca6be4b" 00:05:44.578 ], 00:05:44.578 "product_name": "Malloc disk", 00:05:44.578 "block_size": 512, 00:05:44.578 "num_blocks": 16384, 00:05:44.578 "uuid": "bc597569-e3b8-4f20-9ab9-1935cca6be4b", 00:05:44.578 "assigned_rate_limits": { 00:05:44.578 "rw_ios_per_sec": 0, 00:05:44.578 "rw_mbytes_per_sec": 0, 00:05:44.578 "r_mbytes_per_sec": 0, 00:05:44.578 "w_mbytes_per_sec": 0 00:05:44.578 }, 00:05:44.578 "claimed": true, 00:05:44.578 "claim_type": "exclusive_write", 00:05:44.578 "zoned": false, 00:05:44.578 "supported_io_types": { 00:05:44.578 "read": true, 00:05:44.578 "write": true, 00:05:44.578 "unmap": true, 00:05:44.578 "write_zeroes": true, 00:05:44.578 "flush": true, 00:05:44.578 "reset": true, 00:05:44.578 "compare": false, 00:05:44.578 "compare_and_write": false, 00:05:44.578 "abort": true, 00:05:44.578 "nvme_admin": false, 00:05:44.578 "nvme_io": false 00:05:44.578 }, 00:05:44.578 "memory_domains": [ 00:05:44.578 { 00:05:44.578 "dma_device_id": "system", 00:05:44.578 "dma_device_type": 1 00:05:44.578 }, 00:05:44.578 { 00:05:44.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:44.578 "dma_device_type": 2 00:05:44.578 } 00:05:44.578 ], 00:05:44.578 "driver_specific": {} 00:05:44.578 }, 00:05:44.578 { 00:05:44.578 "name": "Passthru0", 00:05:44.578 "aliases": [ 00:05:44.578 "477c262f-251f-5262-b43b-f2d3576feb87" 00:05:44.578 ], 00:05:44.578 "product_name": "passthru", 00:05:44.578 "block_size": 512, 00:05:44.578 "num_blocks": 16384, 00:05:44.578 "uuid": "477c262f-251f-5262-b43b-f2d3576feb87", 00:05:44.578 "assigned_rate_limits": { 00:05:44.578 "rw_ios_per_sec": 0, 00:05:44.578 "rw_mbytes_per_sec": 0, 00:05:44.578 "r_mbytes_per_sec": 0, 00:05:44.578 "w_mbytes_per_sec": 0 00:05:44.578 }, 00:05:44.578 "claimed": false, 00:05:44.578 "zoned": false, 00:05:44.578 "supported_io_types": { 00:05:44.578 "read": true, 00:05:44.578 "write": true, 00:05:44.578 "unmap": true, 00:05:44.578 "write_zeroes": true, 00:05:44.578 "flush": true, 00:05:44.578 "reset": true, 00:05:44.578 "compare": false, 00:05:44.578 "compare_and_write": false, 00:05:44.578 "abort": true, 00:05:44.578 "nvme_admin": false, 00:05:44.578 "nvme_io": false 00:05:44.578 }, 00:05:44.578 "memory_domains": [ 00:05:44.578 { 00:05:44.578 "dma_device_id": "system", 00:05:44.578 "dma_device_type": 1 00:05:44.578 }, 00:05:44.578 { 00:05:44.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:44.578 "dma_device_type": 2 00:05:44.578 } 00:05:44.578 ], 00:05:44.578 "driver_specific": { 00:05:44.578 "passthru": { 00:05:44.578 "name": "Passthru0", 00:05:44.578 "base_bdev_name": "Malloc0" 00:05:44.578 } 00:05:44.578 } 00:05:44.578 } 00:05:44.578 ]' 00:05:44.578 02:46:35 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:44.578 02:46:35 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:44.578 02:46:35 rpc.rpc_integrity 
-- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:44.578 02:46:35 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.578 02:46:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:44.578 02:46:35 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:44.578 02:46:35 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:44.578 02:46:35 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.578 02:46:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:44.578 02:46:35 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:44.578 02:46:35 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:44.578 02:46:35 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.578 02:46:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:44.578 02:46:35 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:44.578 02:46:35 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:44.578 02:46:35 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:44.837 02:46:35 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:44.837 00:05:44.837 real 0m0.276s 00:05:44.837 user 0m0.158s 00:05:44.837 sys 0m0.058s 00:05:44.837 02:46:35 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:44.837 02:46:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:44.837 ************************************ 00:05:44.837 END TEST rpc_integrity 00:05:44.837 ************************************ 00:05:44.837 02:46:35 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:44.837 02:46:35 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:44.837 02:46:35 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:44.837 02:46:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.837 ************************************ 00:05:44.837 START TEST rpc_plugins 00:05:44.837 ************************************ 00:05:44.837 02:46:35 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:05:44.837 02:46:35 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:44.837 02:46:35 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.837 02:46:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:44.837 02:46:35 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:44.837 02:46:35 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:44.837 02:46:35 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:44.837 02:46:35 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.837 02:46:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:44.837 02:46:35 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:44.837 02:46:35 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:44.837 { 00:05:44.837 "name": "Malloc1", 00:05:44.837 "aliases": [ 00:05:44.837 "3f664b5b-46f7-4eb7-aff2-0cd5d176c147" 00:05:44.837 ], 00:05:44.837 "product_name": "Malloc disk", 00:05:44.837 "block_size": 4096, 00:05:44.837 "num_blocks": 256, 00:05:44.837 "uuid": "3f664b5b-46f7-4eb7-aff2-0cd5d176c147", 00:05:44.837 "assigned_rate_limits": { 00:05:44.837 "rw_ios_per_sec": 0, 00:05:44.837 "rw_mbytes_per_sec": 0, 00:05:44.838 "r_mbytes_per_sec": 0, 00:05:44.838 
"w_mbytes_per_sec": 0 00:05:44.838 }, 00:05:44.838 "claimed": false, 00:05:44.838 "zoned": false, 00:05:44.838 "supported_io_types": { 00:05:44.838 "read": true, 00:05:44.838 "write": true, 00:05:44.838 "unmap": true, 00:05:44.838 "write_zeroes": true, 00:05:44.838 "flush": true, 00:05:44.838 "reset": true, 00:05:44.838 "compare": false, 00:05:44.838 "compare_and_write": false, 00:05:44.838 "abort": true, 00:05:44.838 "nvme_admin": false, 00:05:44.838 "nvme_io": false 00:05:44.838 }, 00:05:44.838 "memory_domains": [ 00:05:44.838 { 00:05:44.838 "dma_device_id": "system", 00:05:44.838 "dma_device_type": 1 00:05:44.838 }, 00:05:44.838 { 00:05:44.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:44.838 "dma_device_type": 2 00:05:44.838 } 00:05:44.838 ], 00:05:44.838 "driver_specific": {} 00:05:44.838 } 00:05:44.838 ]' 00:05:44.838 02:46:35 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:44.838 02:46:35 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:44.838 02:46:35 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:44.838 02:46:35 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.838 02:46:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:44.838 02:46:35 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:44.838 02:46:35 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:44.838 02:46:35 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.838 02:46:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:44.838 02:46:35 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:44.838 02:46:35 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:44.838 02:46:35 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:44.838 02:46:35 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:44.838 00:05:44.838 real 0m0.144s 00:05:44.838 user 0m0.094s 00:05:44.838 sys 0m0.018s 00:05:44.838 02:46:35 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:44.838 02:46:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:44.838 ************************************ 00:05:44.838 END TEST rpc_plugins 00:05:44.838 ************************************ 00:05:45.097 02:46:35 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:45.097 02:46:35 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:45.097 02:46:35 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:45.097 02:46:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.097 ************************************ 00:05:45.097 START TEST rpc_trace_cmd_test 00:05:45.097 ************************************ 00:05:45.097 02:46:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:05:45.097 02:46:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:45.097 02:46:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:45.097 02:46:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.097 02:46:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:45.097 02:46:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.097 02:46:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:45.097 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3478339", 00:05:45.097 "tpoint_group_mask": "0x8", 00:05:45.097 "iscsi_conn": { 
00:05:45.097 "mask": "0x2", 00:05:45.097 "tpoint_mask": "0x0" 00:05:45.097 }, 00:05:45.097 "scsi": { 00:05:45.097 "mask": "0x4", 00:05:45.097 "tpoint_mask": "0x0" 00:05:45.097 }, 00:05:45.097 "bdev": { 00:05:45.097 "mask": "0x8", 00:05:45.097 "tpoint_mask": "0xffffffffffffffff" 00:05:45.097 }, 00:05:45.097 "nvmf_rdma": { 00:05:45.097 "mask": "0x10", 00:05:45.097 "tpoint_mask": "0x0" 00:05:45.097 }, 00:05:45.097 "nvmf_tcp": { 00:05:45.097 "mask": "0x20", 00:05:45.097 "tpoint_mask": "0x0" 00:05:45.097 }, 00:05:45.097 "ftl": { 00:05:45.097 "mask": "0x40", 00:05:45.097 "tpoint_mask": "0x0" 00:05:45.097 }, 00:05:45.097 "blobfs": { 00:05:45.097 "mask": "0x80", 00:05:45.097 "tpoint_mask": "0x0" 00:05:45.097 }, 00:05:45.097 "dsa": { 00:05:45.097 "mask": "0x200", 00:05:45.097 "tpoint_mask": "0x0" 00:05:45.097 }, 00:05:45.097 "thread": { 00:05:45.097 "mask": "0x400", 00:05:45.097 "tpoint_mask": "0x0" 00:05:45.097 }, 00:05:45.097 "nvme_pcie": { 00:05:45.097 "mask": "0x800", 00:05:45.097 "tpoint_mask": "0x0" 00:05:45.097 }, 00:05:45.097 "iaa": { 00:05:45.097 "mask": "0x1000", 00:05:45.097 "tpoint_mask": "0x0" 00:05:45.097 }, 00:05:45.097 "nvme_tcp": { 00:05:45.097 "mask": "0x2000", 00:05:45.097 "tpoint_mask": "0x0" 00:05:45.097 }, 00:05:45.097 "bdev_nvme": { 00:05:45.097 "mask": "0x4000", 00:05:45.097 "tpoint_mask": "0x0" 00:05:45.097 }, 00:05:45.097 "sock": { 00:05:45.097 "mask": "0x8000", 00:05:45.097 "tpoint_mask": "0x0" 00:05:45.097 } 00:05:45.097 }' 00:05:45.097 02:46:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:45.097 02:46:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:45.097 02:46:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:45.097 02:46:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:45.097 02:46:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:45.097 02:46:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:45.097 02:46:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:45.097 02:46:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:45.097 02:46:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:45.357 02:46:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:45.357 00:05:45.357 real 0m0.219s 00:05:45.357 user 0m0.186s 00:05:45.357 sys 0m0.028s 00:05:45.357 02:46:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:45.357 02:46:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:45.357 ************************************ 00:05:45.357 END TEST rpc_trace_cmd_test 00:05:45.357 ************************************ 00:05:45.357 02:46:35 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:45.357 02:46:35 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:45.357 02:46:35 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:45.357 02:46:35 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:45.357 02:46:35 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:45.357 02:46:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.357 ************************************ 00:05:45.357 START TEST rpc_daemon_integrity 00:05:45.357 ************************************ 00:05:45.357 02:46:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:45.357 02:46:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:05:45.357 02:46:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.357 02:46:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.357 02:46:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.357 02:46:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:45.357 02:46:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:45.357 02:46:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:45.357 02:46:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:45.357 02:46:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.357 02:46:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.357 02:46:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.357 02:46:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:45.357 02:46:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:45.357 02:46:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.357 02:46:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.357 02:46:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.357 02:46:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:45.357 { 00:05:45.357 "name": "Malloc2", 00:05:45.357 "aliases": [ 00:05:45.357 "0cedd142-0bb0-4148-bb92-81432e015fbd" 00:05:45.357 ], 00:05:45.357 "product_name": "Malloc disk", 00:05:45.357 "block_size": 512, 00:05:45.357 "num_blocks": 16384, 00:05:45.357 "uuid": "0cedd142-0bb0-4148-bb92-81432e015fbd", 00:05:45.357 "assigned_rate_limits": { 00:05:45.357 "rw_ios_per_sec": 0, 00:05:45.357 "rw_mbytes_per_sec": 0, 00:05:45.357 "r_mbytes_per_sec": 0, 00:05:45.357 "w_mbytes_per_sec": 0 00:05:45.357 }, 00:05:45.357 "claimed": false, 00:05:45.357 "zoned": false, 00:05:45.357 "supported_io_types": { 00:05:45.357 "read": true, 00:05:45.357 "write": true, 00:05:45.357 "unmap": true, 00:05:45.357 "write_zeroes": true, 00:05:45.357 "flush": true, 00:05:45.357 "reset": true, 00:05:45.357 "compare": false, 00:05:45.357 "compare_and_write": false, 00:05:45.357 "abort": true, 00:05:45.357 "nvme_admin": false, 00:05:45.357 "nvme_io": false 00:05:45.357 }, 00:05:45.357 "memory_domains": [ 00:05:45.357 { 00:05:45.357 "dma_device_id": "system", 00:05:45.357 "dma_device_type": 1 00:05:45.357 }, 00:05:45.357 { 00:05:45.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.357 "dma_device_type": 2 00:05:45.357 } 00:05:45.357 ], 00:05:45.357 "driver_specific": {} 00:05:45.357 } 00:05:45.357 ]' 00:05:45.357 02:46:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:45.357 02:46:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:45.357 02:46:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:45.357 02:46:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.357 02:46:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.357 [2024-05-13 02:46:36.156851] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:45.357 [2024-05-13 02:46:36.156885] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:45.357 [2024-05-13 02:46:36.156902] vbdev_passthru.c: 
676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x5eb2770 00:05:45.357 [2024-05-13 02:46:36.156911] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:45.357 [2024-05-13 02:46:36.157608] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:45.357 [2024-05-13 02:46:36.157631] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:45.618 Passthru0 00:05:45.618 02:46:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.618 02:46:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:45.618 02:46:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.618 02:46:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.618 02:46:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.618 02:46:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:45.618 { 00:05:45.618 "name": "Malloc2", 00:05:45.618 "aliases": [ 00:05:45.618 "0cedd142-0bb0-4148-bb92-81432e015fbd" 00:05:45.618 ], 00:05:45.618 "product_name": "Malloc disk", 00:05:45.618 "block_size": 512, 00:05:45.618 "num_blocks": 16384, 00:05:45.618 "uuid": "0cedd142-0bb0-4148-bb92-81432e015fbd", 00:05:45.618 "assigned_rate_limits": { 00:05:45.618 "rw_ios_per_sec": 0, 00:05:45.618 "rw_mbytes_per_sec": 0, 00:05:45.618 "r_mbytes_per_sec": 0, 00:05:45.618 "w_mbytes_per_sec": 0 00:05:45.618 }, 00:05:45.618 "claimed": true, 00:05:45.618 "claim_type": "exclusive_write", 00:05:45.618 "zoned": false, 00:05:45.618 "supported_io_types": { 00:05:45.618 "read": true, 00:05:45.618 "write": true, 00:05:45.618 "unmap": true, 00:05:45.618 "write_zeroes": true, 00:05:45.618 "flush": true, 00:05:45.618 "reset": true, 00:05:45.618 "compare": false, 00:05:45.618 "compare_and_write": false, 00:05:45.618 "abort": true, 00:05:45.618 "nvme_admin": false, 00:05:45.618 "nvme_io": false 00:05:45.618 }, 00:05:45.618 "memory_domains": [ 00:05:45.618 { 00:05:45.618 "dma_device_id": "system", 00:05:45.618 "dma_device_type": 1 00:05:45.618 }, 00:05:45.618 { 00:05:45.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.618 "dma_device_type": 2 00:05:45.618 } 00:05:45.618 ], 00:05:45.618 "driver_specific": {} 00:05:45.618 }, 00:05:45.618 { 00:05:45.618 "name": "Passthru0", 00:05:45.618 "aliases": [ 00:05:45.618 "a7f2b7f2-4b22-55e2-88af-cffcadef30b7" 00:05:45.618 ], 00:05:45.618 "product_name": "passthru", 00:05:45.618 "block_size": 512, 00:05:45.618 "num_blocks": 16384, 00:05:45.618 "uuid": "a7f2b7f2-4b22-55e2-88af-cffcadef30b7", 00:05:45.618 "assigned_rate_limits": { 00:05:45.618 "rw_ios_per_sec": 0, 00:05:45.618 "rw_mbytes_per_sec": 0, 00:05:45.618 "r_mbytes_per_sec": 0, 00:05:45.618 "w_mbytes_per_sec": 0 00:05:45.618 }, 00:05:45.618 "claimed": false, 00:05:45.618 "zoned": false, 00:05:45.618 "supported_io_types": { 00:05:45.618 "read": true, 00:05:45.618 "write": true, 00:05:45.618 "unmap": true, 00:05:45.618 "write_zeroes": true, 00:05:45.618 "flush": true, 00:05:45.618 "reset": true, 00:05:45.618 "compare": false, 00:05:45.618 "compare_and_write": false, 00:05:45.618 "abort": true, 00:05:45.618 "nvme_admin": false, 00:05:45.618 "nvme_io": false 00:05:45.618 }, 00:05:45.618 "memory_domains": [ 00:05:45.618 { 00:05:45.618 "dma_device_id": "system", 00:05:45.618 "dma_device_type": 1 00:05:45.618 }, 00:05:45.618 { 00:05:45.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.618 "dma_device_type": 2 
00:05:45.618 } 00:05:45.618 ], 00:05:45.618 "driver_specific": { 00:05:45.618 "passthru": { 00:05:45.618 "name": "Passthru0", 00:05:45.618 "base_bdev_name": "Malloc2" 00:05:45.618 } 00:05:45.618 } 00:05:45.618 } 00:05:45.618 ]' 00:05:45.618 02:46:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:45.618 02:46:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:45.618 02:46:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:45.618 02:46:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.618 02:46:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.618 02:46:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.618 02:46:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:45.618 02:46:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.618 02:46:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.618 02:46:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.618 02:46:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:45.618 02:46:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.618 02:46:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.618 02:46:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.618 02:46:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:45.618 02:46:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:45.618 02:46:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:45.618 00:05:45.618 real 0m0.279s 00:05:45.618 user 0m0.177s 00:05:45.618 sys 0m0.050s 00:05:45.618 02:46:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:45.618 02:46:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.618 ************************************ 00:05:45.618 END TEST rpc_daemon_integrity 00:05:45.618 ************************************ 00:05:45.618 02:46:36 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:45.618 02:46:36 rpc -- rpc/rpc.sh@84 -- # killprocess 3478339 00:05:45.618 02:46:36 rpc -- common/autotest_common.sh@946 -- # '[' -z 3478339 ']' 00:05:45.618 02:46:36 rpc -- common/autotest_common.sh@950 -- # kill -0 3478339 00:05:45.618 02:46:36 rpc -- common/autotest_common.sh@951 -- # uname 00:05:45.618 02:46:36 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:45.618 02:46:36 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3478339 00:05:45.618 02:46:36 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:45.618 02:46:36 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:45.618 02:46:36 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3478339' 00:05:45.618 killing process with pid 3478339 00:05:45.618 02:46:36 rpc -- common/autotest_common.sh@965 -- # kill 3478339 00:05:45.618 02:46:36 rpc -- common/autotest_common.sh@970 -- # wait 3478339 00:05:46.188 00:05:46.188 real 0m2.019s 00:05:46.188 user 0m2.581s 00:05:46.188 sys 0m0.765s 00:05:46.188 02:46:36 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:46.188 02:46:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.188 
************************************ 00:05:46.188 END TEST rpc 00:05:46.188 ************************************ 00:05:46.188 02:46:36 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:46.188 02:46:36 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:46.188 02:46:36 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:46.188 02:46:36 -- common/autotest_common.sh@10 -- # set +x 00:05:46.188 ************************************ 00:05:46.188 START TEST skip_rpc 00:05:46.188 ************************************ 00:05:46.188 02:46:36 skip_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:46.188 * Looking for test storage... 00:05:46.188 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:46.188 02:46:36 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:05:46.188 02:46:36 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:05:46.188 02:46:36 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:46.188 02:46:36 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:46.188 02:46:36 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:46.188 02:46:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.188 ************************************ 00:05:46.188 START TEST skip_rpc 00:05:46.188 ************************************ 00:05:46.188 02:46:36 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:05:46.188 02:46:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3478796 00:05:46.188 02:46:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:46.188 02:46:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:46.188 02:46:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:46.188 [2024-05-13 02:46:36.938621] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:05:46.188 [2024-05-13 02:46:36.938705] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3478796 ] 00:05:46.188 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.188 [2024-05-13 02:46:36.975567] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
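The rpc suite that just closed out above (rpc_integrity, rpc_plugins, rpc_trace_cmd_test, rpc_daemon_integrity) drives everything through rpc_cmd against the spdk_tgt instance started with -e bdev. Condensed by hand, the integrity flow recorded in those xtrace lines amounts to the sequence below; the method names and arguments are taken verbatim from the log, while replaying them through scripts/rpc.py against your own target is an assumption for illustration, not part of this recorded run:

    # Sketch of the rpc_integrity / rpc_trace_cmd_test sequence, replayed with scripts/rpc.py.
    rpc=scripts/rpc.py
    $rpc bdev_malloc_create 8 512                        # creates Malloc0
    $rpc bdev_passthru_create -b Malloc0 -p Passthru0    # stacks a passthru bdev on top
    $rpc bdev_get_bdevs | jq length                      # 2 while both bdevs exist
    $rpc bdev_passthru_delete Passthru0
    $rpc bdev_malloc_delete Malloc0
    $rpc bdev_get_bdevs | jq length                      # back to 0
    $rpc trace_get_info | jq 'has("tpoint_group_mask")'  # the check rpc_trace_cmd_test makes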
00:05:46.447 [2024-05-13 02:46:37.007109] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.447 [2024-05-13 02:46:37.045669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.810 02:46:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:51.810 02:46:41 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:51.810 02:46:41 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:51.810 02:46:41 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:51.810 02:46:41 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:51.810 02:46:41 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:51.810 02:46:41 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:51.810 02:46:41 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:51.810 02:46:41 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.810 02:46:41 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.810 02:46:41 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:51.810 02:46:41 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:51.810 02:46:41 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:51.810 02:46:41 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:51.810 02:46:41 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:51.810 02:46:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:51.810 02:46:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3478796 00:05:51.810 02:46:41 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 3478796 ']' 00:05:51.810 02:46:41 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 3478796 00:05:51.810 02:46:41 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:05:51.810 02:46:41 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:51.810 02:46:41 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3478796 00:05:51.810 02:46:41 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:51.810 02:46:41 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:51.810 02:46:41 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3478796' 00:05:51.810 killing process with pid 3478796 00:05:51.810 02:46:41 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 3478796 00:05:51.810 02:46:41 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 3478796 00:05:51.810 00:05:51.810 real 0m5.350s 00:05:51.810 user 0m5.109s 00:05:51.810 sys 0m0.270s 00:05:51.810 02:46:42 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:51.810 02:46:42 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.810 ************************************ 00:05:51.810 END TEST skip_rpc 00:05:51.810 ************************************ 00:05:51.810 02:46:42 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:51.810 02:46:42 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:51.810 02:46:42 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 
00:05:51.810 02:46:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.810 ************************************ 00:05:51.810 START TEST skip_rpc_with_json 00:05:51.810 ************************************ 00:05:51.810 02:46:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:05:51.810 02:46:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:51.810 02:46:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3479771 00:05:51.810 02:46:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:51.810 02:46:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3479771 00:05:51.810 02:46:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 3479771 ']' 00:05:51.810 02:46:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.810 02:46:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:51.810 02:46:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.810 02:46:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:51.810 02:46:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:51.810 02:46:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:51.810 [2024-05-13 02:46:42.366489] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:05:51.810 [2024-05-13 02:46:42.366570] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3479771 ] 00:05:51.810 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.810 [2024-05-13 02:46:42.403605] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:51.810 [2024-05-13 02:46:42.435811] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.810 [2024-05-13 02:46:42.475226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.069 02:46:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:52.069 02:46:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:05:52.069 02:46:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:52.069 02:46:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:52.069 02:46:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:52.070 [2024-05-13 02:46:42.663745] nvmf_rpc.c:2531:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:52.070 request: 00:05:52.070 { 00:05:52.070 "trtype": "tcp", 00:05:52.070 "method": "nvmf_get_transports", 00:05:52.070 "req_id": 1 00:05:52.070 } 00:05:52.070 Got JSON-RPC error response 00:05:52.070 response: 00:05:52.070 { 00:05:52.070 "code": -19, 00:05:52.070 "message": "No such device" 00:05:52.070 } 00:05:52.070 02:46:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:52.070 02:46:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:52.070 02:46:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:52.070 02:46:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:52.070 [2024-05-13 02:46:42.671818] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:52.070 02:46:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:52.070 02:46:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:52.070 02:46:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:52.070 02:46:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:52.070 02:46:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:52.070 02:46:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:05:52.070 { 00:05:52.070 "subsystems": [ 00:05:52.070 { 00:05:52.070 "subsystem": "scheduler", 00:05:52.070 "config": [ 00:05:52.070 { 00:05:52.070 "method": "framework_set_scheduler", 00:05:52.070 "params": { 00:05:52.070 "name": "static" 00:05:52.070 } 00:05:52.070 } 00:05:52.070 ] 00:05:52.070 }, 00:05:52.070 { 00:05:52.070 "subsystem": "vmd", 00:05:52.070 "config": [] 00:05:52.070 }, 00:05:52.070 { 00:05:52.070 "subsystem": "sock", 00:05:52.070 "config": [ 00:05:52.070 { 00:05:52.070 "method": "sock_impl_set_options", 00:05:52.070 "params": { 00:05:52.070 "impl_name": "posix", 00:05:52.070 "recv_buf_size": 2097152, 00:05:52.070 "send_buf_size": 2097152, 00:05:52.070 "enable_recv_pipe": true, 00:05:52.070 "enable_quickack": false, 00:05:52.070 "enable_placement_id": 0, 00:05:52.070 "enable_zerocopy_send_server": true, 00:05:52.070 "enable_zerocopy_send_client": false, 00:05:52.070 "zerocopy_threshold": 0, 00:05:52.070 "tls_version": 0, 00:05:52.070 "enable_ktls": false 00:05:52.070 } 00:05:52.070 }, 00:05:52.070 { 00:05:52.070 "method": "sock_impl_set_options", 00:05:52.070 "params": { 00:05:52.070 "impl_name": "ssl", 00:05:52.070 "recv_buf_size": 4096, 00:05:52.070 "send_buf_size": 4096, 
00:05:52.070 "enable_recv_pipe": true, 00:05:52.070 "enable_quickack": false, 00:05:52.070 "enable_placement_id": 0, 00:05:52.070 "enable_zerocopy_send_server": true, 00:05:52.070 "enable_zerocopy_send_client": false, 00:05:52.070 "zerocopy_threshold": 0, 00:05:52.070 "tls_version": 0, 00:05:52.070 "enable_ktls": false 00:05:52.070 } 00:05:52.070 } 00:05:52.070 ] 00:05:52.070 }, 00:05:52.070 { 00:05:52.070 "subsystem": "iobuf", 00:05:52.070 "config": [ 00:05:52.070 { 00:05:52.070 "method": "iobuf_set_options", 00:05:52.070 "params": { 00:05:52.070 "small_pool_count": 8192, 00:05:52.070 "large_pool_count": 1024, 00:05:52.070 "small_bufsize": 8192, 00:05:52.070 "large_bufsize": 135168 00:05:52.070 } 00:05:52.070 } 00:05:52.070 ] 00:05:52.070 }, 00:05:52.070 { 00:05:52.070 "subsystem": "keyring", 00:05:52.070 "config": [] 00:05:52.070 }, 00:05:52.070 { 00:05:52.070 "subsystem": "vfio_user_target", 00:05:52.070 "config": null 00:05:52.070 }, 00:05:52.070 { 00:05:52.070 "subsystem": "accel", 00:05:52.070 "config": [ 00:05:52.070 { 00:05:52.070 "method": "accel_set_options", 00:05:52.070 "params": { 00:05:52.070 "small_cache_size": 128, 00:05:52.070 "large_cache_size": 16, 00:05:52.070 "task_count": 2048, 00:05:52.070 "sequence_count": 2048, 00:05:52.070 "buf_count": 2048 00:05:52.070 } 00:05:52.070 } 00:05:52.070 ] 00:05:52.070 }, 00:05:52.070 { 00:05:52.070 "subsystem": "bdev", 00:05:52.070 "config": [ 00:05:52.070 { 00:05:52.070 "method": "bdev_set_options", 00:05:52.070 "params": { 00:05:52.070 "bdev_io_pool_size": 65535, 00:05:52.070 "bdev_io_cache_size": 256, 00:05:52.070 "bdev_auto_examine": true, 00:05:52.070 "iobuf_small_cache_size": 128, 00:05:52.070 "iobuf_large_cache_size": 16 00:05:52.070 } 00:05:52.070 }, 00:05:52.070 { 00:05:52.070 "method": "bdev_raid_set_options", 00:05:52.070 "params": { 00:05:52.070 "process_window_size_kb": 1024 00:05:52.070 } 00:05:52.070 }, 00:05:52.070 { 00:05:52.070 "method": "bdev_nvme_set_options", 00:05:52.070 "params": { 00:05:52.070 "action_on_timeout": "none", 00:05:52.070 "timeout_us": 0, 00:05:52.070 "timeout_admin_us": 0, 00:05:52.070 "keep_alive_timeout_ms": 10000, 00:05:52.070 "arbitration_burst": 0, 00:05:52.070 "low_priority_weight": 0, 00:05:52.070 "medium_priority_weight": 0, 00:05:52.070 "high_priority_weight": 0, 00:05:52.070 "nvme_adminq_poll_period_us": 10000, 00:05:52.070 "nvme_ioq_poll_period_us": 0, 00:05:52.070 "io_queue_requests": 0, 00:05:52.070 "delay_cmd_submit": true, 00:05:52.070 "transport_retry_count": 4, 00:05:52.070 "bdev_retry_count": 3, 00:05:52.070 "transport_ack_timeout": 0, 00:05:52.070 "ctrlr_loss_timeout_sec": 0, 00:05:52.070 "reconnect_delay_sec": 0, 00:05:52.070 "fast_io_fail_timeout_sec": 0, 00:05:52.070 "disable_auto_failback": false, 00:05:52.070 "generate_uuids": false, 00:05:52.070 "transport_tos": 0, 00:05:52.070 "nvme_error_stat": false, 00:05:52.070 "rdma_srq_size": 0, 00:05:52.070 "io_path_stat": false, 00:05:52.070 "allow_accel_sequence": false, 00:05:52.070 "rdma_max_cq_size": 0, 00:05:52.070 "rdma_cm_event_timeout_ms": 0, 00:05:52.070 "dhchap_digests": [ 00:05:52.070 "sha256", 00:05:52.070 "sha384", 00:05:52.070 "sha512" 00:05:52.070 ], 00:05:52.070 "dhchap_dhgroups": [ 00:05:52.070 "null", 00:05:52.070 "ffdhe2048", 00:05:52.070 "ffdhe3072", 00:05:52.070 "ffdhe4096", 00:05:52.070 "ffdhe6144", 00:05:52.070 "ffdhe8192" 00:05:52.070 ] 00:05:52.070 } 00:05:52.070 }, 00:05:52.070 { 00:05:52.070 "method": "bdev_nvme_set_hotplug", 00:05:52.070 "params": { 00:05:52.070 "period_us": 100000, 00:05:52.070 
"enable": false 00:05:52.070 } 00:05:52.070 }, 00:05:52.070 { 00:05:52.070 "method": "bdev_iscsi_set_options", 00:05:52.070 "params": { 00:05:52.070 "timeout_sec": 30 00:05:52.070 } 00:05:52.070 }, 00:05:52.070 { 00:05:52.070 "method": "bdev_wait_for_examine" 00:05:52.070 } 00:05:52.070 ] 00:05:52.070 }, 00:05:52.070 { 00:05:52.070 "subsystem": "nvmf", 00:05:52.070 "config": [ 00:05:52.070 { 00:05:52.070 "method": "nvmf_set_config", 00:05:52.070 "params": { 00:05:52.070 "discovery_filter": "match_any", 00:05:52.070 "admin_cmd_passthru": { 00:05:52.070 "identify_ctrlr": false 00:05:52.070 } 00:05:52.070 } 00:05:52.070 }, 00:05:52.070 { 00:05:52.070 "method": "nvmf_set_max_subsystems", 00:05:52.070 "params": { 00:05:52.070 "max_subsystems": 1024 00:05:52.070 } 00:05:52.070 }, 00:05:52.070 { 00:05:52.070 "method": "nvmf_set_crdt", 00:05:52.070 "params": { 00:05:52.070 "crdt1": 0, 00:05:52.070 "crdt2": 0, 00:05:52.070 "crdt3": 0 00:05:52.070 } 00:05:52.070 }, 00:05:52.070 { 00:05:52.070 "method": "nvmf_create_transport", 00:05:52.070 "params": { 00:05:52.070 "trtype": "TCP", 00:05:52.070 "max_queue_depth": 128, 00:05:52.070 "max_io_qpairs_per_ctrlr": 127, 00:05:52.070 "in_capsule_data_size": 4096, 00:05:52.070 "max_io_size": 131072, 00:05:52.070 "io_unit_size": 131072, 00:05:52.070 "max_aq_depth": 128, 00:05:52.070 "num_shared_buffers": 511, 00:05:52.070 "buf_cache_size": 4294967295, 00:05:52.070 "dif_insert_or_strip": false, 00:05:52.070 "zcopy": false, 00:05:52.070 "c2h_success": true, 00:05:52.070 "sock_priority": 0, 00:05:52.070 "abort_timeout_sec": 1, 00:05:52.070 "ack_timeout": 0, 00:05:52.070 "data_wr_pool_size": 0 00:05:52.070 } 00:05:52.070 } 00:05:52.070 ] 00:05:52.070 }, 00:05:52.070 { 00:05:52.070 "subsystem": "nbd", 00:05:52.070 "config": [] 00:05:52.070 }, 00:05:52.070 { 00:05:52.070 "subsystem": "ublk", 00:05:52.070 "config": [] 00:05:52.070 }, 00:05:52.070 { 00:05:52.070 "subsystem": "vhost_blk", 00:05:52.070 "config": [] 00:05:52.070 }, 00:05:52.070 { 00:05:52.070 "subsystem": "scsi", 00:05:52.070 "config": null 00:05:52.070 }, 00:05:52.070 { 00:05:52.070 "subsystem": "iscsi", 00:05:52.070 "config": [ 00:05:52.070 { 00:05:52.070 "method": "iscsi_set_options", 00:05:52.070 "params": { 00:05:52.070 "node_base": "iqn.2016-06.io.spdk", 00:05:52.070 "max_sessions": 128, 00:05:52.070 "max_connections_per_session": 2, 00:05:52.070 "max_queue_depth": 64, 00:05:52.070 "default_time2wait": 2, 00:05:52.070 "default_time2retain": 20, 00:05:52.070 "first_burst_length": 8192, 00:05:52.070 "immediate_data": true, 00:05:52.070 "allow_duplicated_isid": false, 00:05:52.070 "error_recovery_level": 0, 00:05:52.070 "nop_timeout": 60, 00:05:52.070 "nop_in_interval": 30, 00:05:52.070 "disable_chap": false, 00:05:52.070 "require_chap": false, 00:05:52.070 "mutual_chap": false, 00:05:52.070 "chap_group": 0, 00:05:52.070 "max_large_datain_per_connection": 64, 00:05:52.070 "max_r2t_per_connection": 4, 00:05:52.070 "pdu_pool_size": 36864, 00:05:52.070 "immediate_data_pool_size": 16384, 00:05:52.071 "data_out_pool_size": 2048 00:05:52.071 } 00:05:52.071 } 00:05:52.071 ] 00:05:52.071 }, 00:05:52.071 { 00:05:52.071 "subsystem": "vhost_scsi", 00:05:52.071 "config": [] 00:05:52.071 } 00:05:52.071 ] 00:05:52.071 } 00:05:52.071 02:46:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:52.071 02:46:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3479771 00:05:52.071 02:46:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 
3479771 ']' 00:05:52.071 02:46:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 3479771 00:05:52.071 02:46:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:05:52.071 02:46:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:52.071 02:46:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3479771 00:05:52.330 02:46:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:52.330 02:46:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:52.330 02:46:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3479771' 00:05:52.330 killing process with pid 3479771 00:05:52.330 02:46:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 3479771 00:05:52.330 02:46:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 3479771 00:05:52.589 02:46:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3479904 00:05:52.589 02:46:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:05:52.589 02:46:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:57.877 02:46:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3479904 00:05:57.877 02:46:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 3479904 ']' 00:05:57.877 02:46:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 3479904 00:05:57.877 02:46:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:05:57.877 02:46:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:57.877 02:46:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3479904 00:05:57.877 02:46:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:57.877 02:46:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:57.877 02:46:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3479904' 00:05:57.877 killing process with pid 3479904 00:05:57.877 02:46:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 3479904 00:05:57.877 02:46:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 3479904 00:05:57.877 02:46:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:05:57.877 02:46:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:05:57.877 00:05:57.877 real 0m6.189s 00:05:57.877 user 0m5.867s 00:05:57.877 sys 0m0.587s 00:05:57.877 02:46:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:57.877 02:46:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:57.877 ************************************ 00:05:57.877 END TEST skip_rpc_with_json 00:05:57.877 ************************************ 00:05:57.877 02:46:48 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay 
test_skip_rpc_with_delay 00:05:57.877 02:46:48 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:57.877 02:46:48 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:57.877 02:46:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.877 ************************************ 00:05:57.877 START TEST skip_rpc_with_delay 00:05:57.877 ************************************ 00:05:57.877 02:46:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:05:57.877 02:46:48 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:57.877 02:46:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:57.877 02:46:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:57.877 02:46:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:57.877 02:46:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:57.877 02:46:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:57.877 02:46:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:57.877 02:46:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:57.877 02:46:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:57.877 02:46:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:57.877 02:46:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:57.877 02:46:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:57.877 [2024-05-13 02:46:48.644586] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
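The entry above is the expected failure for skip_rpc_with_delay: spdk_tgt refuses --wait-for-rpc when --no-rpc-server is also given. A hedged way to reproduce the same assertion by hand, expecting a non-zero exit status just as the NOT helper does:

    # this flag combination must fail: there is no RPC server to wait for
    if /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "unexpected success: --wait-for-rpc should be rejected without an RPC server" >&2
        exit 1
    fi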
00:05:57.877 [2024-05-13 02:46:48.644697] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:57.877 02:46:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:57.877 02:46:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:57.877 02:46:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:57.877 02:46:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:57.877 00:05:57.878 real 0m0.043s 00:05:57.878 user 0m0.017s 00:05:57.878 sys 0m0.027s 00:05:57.878 02:46:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:57.878 02:46:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:57.878 ************************************ 00:05:57.878 END TEST skip_rpc_with_delay 00:05:57.878 ************************************ 00:05:58.137 02:46:48 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:58.137 02:46:48 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:58.137 02:46:48 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:58.137 02:46:48 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:58.137 02:46:48 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:58.137 02:46:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.137 ************************************ 00:05:58.137 START TEST exit_on_failed_rpc_init 00:05:58.137 ************************************ 00:05:58.137 02:46:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:05:58.137 02:46:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3481015 00:05:58.137 02:46:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3481015 00:05:58.137 02:46:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:58.137 02:46:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 3481015 ']' 00:05:58.137 02:46:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.137 02:46:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:58.137 02:46:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.137 02:46:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:58.137 02:46:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:58.137 [2024-05-13 02:46:48.778656] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:05:58.137 [2024-05-13 02:46:48.778731] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3481015 ] 00:05:58.137 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.137 [2024-05-13 02:46:48.815862] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:05:58.137 [2024-05-13 02:46:48.846672] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.137 [2024-05-13 02:46:48.885058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.397 02:46:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:58.397 02:46:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:05:58.397 02:46:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:58.397 02:46:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:58.397 02:46:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:58.397 02:46:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:58.397 02:46:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:58.397 02:46:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:58.397 02:46:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:58.397 02:46:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:58.397 02:46:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:58.397 02:46:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:58.397 02:46:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:58.397 02:46:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:58.397 02:46:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:58.397 [2024-05-13 02:46:49.093624] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:05:58.397 [2024-05-13 02:46:49.093708] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3481025 ] 00:05:58.397 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.397 [2024-05-13 02:46:49.129667] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:58.397 [2024-05-13 02:46:49.160748] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.397 [2024-05-13 02:46:49.200030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:58.397 [2024-05-13 02:46:49.200121] rpc.c: 181:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
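The second spdk_tgt above fails on purpose: both instances default to /var/tmp/spdk.sock, so the later one cannot claim the RPC socket. Outside this negative test, two targets can coexist by giving each its own RPC socket with -r; a sketch, with arbitrary example socket paths:

    spdk_tgt=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt
    # first instance on core 0 with its own RPC socket (example path)
    $spdk_tgt -m 0x1 -r /var/tmp/spdk_a.sock &
    # second instance on core 1; a distinct -r path avoids the "socket in use" error seen above
    $spdk_tgt -m 0x2 -r /var/tmp/spdk_b.sock &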
00:05:58.397 [2024-05-13 02:46:49.200135] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:58.397 [2024-05-13 02:46:49.200143] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:58.656 02:46:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:58.656 02:46:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:58.656 02:46:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:58.656 02:46:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:58.656 02:46:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:58.656 02:46:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:58.656 02:46:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:58.656 02:46:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3481015 00:05:58.656 02:46:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 3481015 ']' 00:05:58.656 02:46:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 3481015 00:05:58.656 02:46:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:05:58.656 02:46:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:58.656 02:46:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3481015 00:05:58.656 02:46:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:58.656 02:46:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:58.656 02:46:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3481015' 00:05:58.656 killing process with pid 3481015 00:05:58.656 02:46:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 3481015 00:05:58.656 02:46:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 3481015 00:05:58.915 00:05:58.915 real 0m0.848s 00:05:58.915 user 0m0.859s 00:05:58.915 sys 0m0.390s 00:05:58.915 02:46:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:58.915 02:46:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:58.915 ************************************ 00:05:58.915 END TEST exit_on_failed_rpc_init 00:05:58.915 ************************************ 00:05:58.915 02:46:49 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:05:58.915 00:05:58.915 real 0m12.874s 00:05:58.915 user 0m12.016s 00:05:58.915 sys 0m1.576s 00:05:58.915 02:46:49 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:58.915 02:46:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.915 ************************************ 00:05:58.915 END TEST skip_rpc 00:05:58.915 ************************************ 00:05:58.915 02:46:49 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:58.915 02:46:49 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:58.915 02:46:49 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:58.915 02:46:49 
-- common/autotest_common.sh@10 -- # set +x 00:05:59.174 ************************************ 00:05:59.174 START TEST rpc_client 00:05:59.174 ************************************ 00:05:59.174 02:46:49 rpc_client -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:59.174 * Looking for test storage... 00:05:59.174 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client 00:05:59.174 02:46:49 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:59.174 OK 00:05:59.174 02:46:49 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:59.174 00:05:59.174 real 0m0.132s 00:05:59.174 user 0m0.061s 00:05:59.174 sys 0m0.081s 00:05:59.174 02:46:49 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:59.174 02:46:49 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:59.174 ************************************ 00:05:59.174 END TEST rpc_client 00:05:59.174 ************************************ 00:05:59.174 02:46:49 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:05:59.174 02:46:49 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:59.174 02:46:49 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:59.174 02:46:49 -- common/autotest_common.sh@10 -- # set +x 00:05:59.174 ************************************ 00:05:59.174 START TEST json_config 00:05:59.174 ************************************ 00:05:59.174 02:46:49 json_config -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:05:59.434 02:46:50 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:05:59.434 02:46:50 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:59.434 02:46:50 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:59.434 02:46:50 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:59.434 02:46:50 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:59.434 02:46:50 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:59.434 02:46:50 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:59.434 02:46:50 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:59.434 02:46:50 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:59.434 02:46:50 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:59.434 02:46:50 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:59.434 02:46:50 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:59.434 02:46:50 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:05:59.434 02:46:50 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:05:59.434 02:46:50 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:59.434 02:46:50 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:59.434 02:46:50 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:59.434 02:46:50 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:59.434 02:46:50 json_config -- nvmf/common.sh@45 -- # 
source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:05:59.434 02:46:50 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:59.434 02:46:50 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:59.434 02:46:50 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:59.434 02:46:50 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.434 02:46:50 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.434 02:46:50 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.434 02:46:50 json_config -- paths/export.sh@5 -- # export PATH 00:05:59.434 02:46:50 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.434 02:46:50 json_config -- nvmf/common.sh@47 -- # : 0 00:05:59.434 02:46:50 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:59.434 02:46:50 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:59.434 02:46:50 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:59.434 02:46:50 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:59.434 02:46:50 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:59.434 02:46:50 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:59.434 02:46:50 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:59.434 02:46:50 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:59.434 02:46:50 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:05:59.434 02:46:50 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:59.434 02:46:50 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:59.434 02:46:50 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:59.434 02:46:50 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + 
SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:59.434 02:46:50 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:59.434 WARNING: No tests are enabled so not running JSON configuration tests 00:05:59.434 02:46:50 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:59.434 00:05:59.434 real 0m0.106s 00:05:59.434 user 0m0.056s 00:05:59.434 sys 0m0.051s 00:05:59.434 02:46:50 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:59.434 02:46:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:59.434 ************************************ 00:05:59.434 END TEST json_config 00:05:59.434 ************************************ 00:05:59.434 02:46:50 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:59.434 02:46:50 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:59.434 02:46:50 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:59.434 02:46:50 -- common/autotest_common.sh@10 -- # set +x 00:05:59.434 ************************************ 00:05:59.434 START TEST json_config_extra_key 00:05:59.434 ************************************ 00:05:59.434 02:46:50 json_config_extra_key -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:59.434 02:46:50 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:05:59.434 02:46:50 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:59.434 02:46:50 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:59.434 02:46:50 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:59.434 02:46:50 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:59.434 02:46:50 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:59.434 02:46:50 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:59.434 02:46:50 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:59.434 02:46:50 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:59.434 02:46:50 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:59.434 02:46:50 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:59.434 02:46:50 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:59.434 02:46:50 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:05:59.434 02:46:50 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:05:59.434 02:46:50 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:59.434 02:46:50 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:59.434 02:46:50 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:59.434 02:46:50 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:59.434 02:46:50 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:05:59.694 02:46:50 json_config_extra_key 
-- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:59.694 02:46:50 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:59.694 02:46:50 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:59.694 02:46:50 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.694 02:46:50 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.694 02:46:50 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.694 02:46:50 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:59.694 02:46:50 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.694 02:46:50 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:59.694 02:46:50 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:59.694 02:46:50 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:59.694 02:46:50 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:59.694 02:46:50 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:59.694 02:46:50 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:59.694 02:46:50 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:59.694 02:46:50 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:59.694 02:46:50 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:59.694 02:46:50 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:05:59.694 02:46:50 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:59.694 02:46:50 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:59.694 02:46:50 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # 
app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:59.694 02:46:50 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:59.694 02:46:50 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:59.694 02:46:50 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:59.694 02:46:50 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:59.694 02:46:50 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:59.694 02:46:50 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:59.694 02:46:50 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:59.694 INFO: launching applications... 00:05:59.694 02:46:50 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:05:59.694 02:46:50 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:59.694 02:46:50 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:59.694 02:46:50 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:59.694 02:46:50 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:59.694 02:46:50 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:59.694 02:46:50 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:59.694 02:46:50 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:59.694 02:46:50 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3481425 00:05:59.694 02:46:50 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:59.694 Waiting for target to run... 00:05:59.694 02:46:50 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3481425 /var/tmp/spdk_tgt.sock 00:05:59.694 02:46:50 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 3481425 ']' 00:05:59.694 02:46:50 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:59.694 02:46:50 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:59.694 02:46:50 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:59.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:59.694 02:46:50 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:59.694 02:46:50 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:59.694 02:46:50 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:05:59.694 [2024-05-13 02:46:50.267317] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 
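Here json_config_extra_key starts the target non-interactively: the configuration comes from a JSON file passed at startup instead of being applied over RPC afterwards. A minimal sketch of that launch, reusing the paths and sizes from the traced command above (-s 1024 caps the memory allocation at 1024 MB and -r moves the RPC socket to /var/tmp/spdk_tgt.sock):

    # apply extra_key.json at startup; no further RPC configuration is needed
    /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock \
        --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json &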
00:05:59.694 [2024-05-13 02:46:50.267378] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3481425 ] 00:05:59.694 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.953 [2024-05-13 02:46:50.517875] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:59.953 [2024-05-13 02:46:50.549872] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.953 [2024-05-13 02:46:50.571707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.521 02:46:51 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:00.521 02:46:51 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:06:00.521 02:46:51 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:00.521 00:06:00.521 02:46:51 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:00.521 INFO: shutting down applications... 00:06:00.521 02:46:51 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:00.521 02:46:51 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:00.521 02:46:51 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:00.521 02:46:51 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3481425 ]] 00:06:00.521 02:46:51 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3481425 00:06:00.521 02:46:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:00.521 02:46:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:00.521 02:46:51 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3481425 00:06:00.521 02:46:51 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:00.780 02:46:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:00.780 02:46:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:00.780 02:46:51 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3481425 00:06:00.780 02:46:51 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:00.780 02:46:51 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:00.780 02:46:51 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:00.780 02:46:51 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:00.780 SPDK target shutdown done 00:06:00.780 02:46:51 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:00.780 Success 00:06:00.780 00:06:00.780 real 0m1.435s 00:06:00.780 user 0m1.159s 00:06:00.780 sys 0m0.401s 00:06:00.780 02:46:51 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:00.780 02:46:51 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:00.780 ************************************ 00:06:00.780 END TEST json_config_extra_key 00:06:00.780 ************************************ 00:06:01.039 02:46:51 -- spdk/autotest.sh@170 -- # run_test alias_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:01.039 02:46:51 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 
00:06:01.039 02:46:51 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:01.039 02:46:51 -- common/autotest_common.sh@10 -- # set +x 00:06:01.039 ************************************ 00:06:01.039 START TEST alias_rpc 00:06:01.039 ************************************ 00:06:01.039 02:46:51 alias_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:01.039 * Looking for test storage... 00:06:01.039 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc 00:06:01.039 02:46:51 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:01.039 02:46:51 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3481741 00:06:01.039 02:46:51 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:01.039 02:46:51 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3481741 00:06:01.039 02:46:51 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 3481741 ']' 00:06:01.039 02:46:51 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.039 02:46:51 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:01.039 02:46:51 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.039 02:46:51 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:01.039 02:46:51 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.039 [2024-05-13 02:46:51.799564] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:06:01.039 [2024-05-13 02:46:51.799633] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3481741 ] 00:06:01.039 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.039 [2024-05-13 02:46:51.835350] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
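The alias_rpc target starting up here is configured through rpc.py rather than a startup file: a JSON configuration is replayed into the running target with load_config (the trace that follows passes -i as well). A hedged sketch of that round trip against the default socket; the temporary file path is an arbitrary example and load_config is assumed to read the JSON from stdin:

    rpc=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py
    # capture the live configuration of a running target
    $rpc -s /var/tmp/spdk.sock save_config > /tmp/spdk_config.json
    # replay it into a (re)started target
    $rpc -s /var/tmp/spdk.sock load_config < /tmp/spdk_config.json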
00:06:01.298 [2024-05-13 02:46:51.867027] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.298 [2024-05-13 02:46:51.905289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.298 02:46:52 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:01.298 02:46:52 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:01.298 02:46:52 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:01.558 02:46:52 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3481741 00:06:01.558 02:46:52 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 3481741 ']' 00:06:01.558 02:46:52 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 3481741 00:06:01.558 02:46:52 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:06:01.558 02:46:52 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:01.558 02:46:52 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3481741 00:06:01.558 02:46:52 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:01.558 02:46:52 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:01.558 02:46:52 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3481741' 00:06:01.558 killing process with pid 3481741 00:06:01.558 02:46:52 alias_rpc -- common/autotest_common.sh@965 -- # kill 3481741 00:06:01.558 02:46:52 alias_rpc -- common/autotest_common.sh@970 -- # wait 3481741 00:06:01.817 00:06:01.817 real 0m0.950s 00:06:01.817 user 0m0.929s 00:06:01.817 sys 0m0.398s 00:06:01.817 02:46:52 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:01.817 02:46:52 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.817 ************************************ 00:06:01.817 END TEST alias_rpc 00:06:01.817 ************************************ 00:06:02.075 02:46:52 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:06:02.075 02:46:52 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:02.075 02:46:52 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:02.075 02:46:52 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:02.075 02:46:52 -- common/autotest_common.sh@10 -- # set +x 00:06:02.075 ************************************ 00:06:02.075 START TEST spdkcli_tcp 00:06:02.075 ************************************ 00:06:02.075 02:46:52 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:02.075 * Looking for test storage... 
00:06:02.075 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli 00:06:02.075 02:46:52 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/common.sh 00:06:02.076 02:46:52 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:02.076 02:46:52 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/clear_config.py 00:06:02.076 02:46:52 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:02.076 02:46:52 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:02.076 02:46:52 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:02.076 02:46:52 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:02.076 02:46:52 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:02.076 02:46:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:02.076 02:46:52 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3481827 00:06:02.076 02:46:52 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3481827 00:06:02.076 02:46:52 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:02.076 02:46:52 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 3481827 ']' 00:06:02.076 02:46:52 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.076 02:46:52 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:02.076 02:46:52 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.076 02:46:52 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:02.076 02:46:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:02.076 [2024-05-13 02:46:52.850545] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:06:02.076 [2024-05-13 02:46:52.850608] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3481827 ] 00:06:02.334 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.334 [2024-05-13 02:46:52.887136] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
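The spdkcli_tcp test starting here drives the same JSON-RPC interface over TCP instead of the UNIX socket: a socat process bridges 127.0.0.1:9998 to /var/tmp/spdk.sock and rpc.py talks to the TCP side, as the traced commands just below show. A condensed sketch of that bridge, with the retry and timeout flags taken from the trace:

    # forward a local TCP port to the target's UNIX-domain RPC socket
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    # query the RPC method list through the bridge
    /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill $socat_pid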
00:06:02.334 [2024-05-13 02:46:52.919344] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:02.334 [2024-05-13 02:46:52.958620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.334 [2024-05-13 02:46:52.958623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.594 02:46:53 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:02.594 02:46:53 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:06:02.594 02:46:53 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3481973 00:06:02.594 02:46:53 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:02.594 02:46:53 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:02.594 [ 00:06:02.594 "spdk_get_version", 00:06:02.594 "rpc_get_methods", 00:06:02.594 "trace_get_info", 00:06:02.594 "trace_get_tpoint_group_mask", 00:06:02.594 "trace_disable_tpoint_group", 00:06:02.594 "trace_enable_tpoint_group", 00:06:02.594 "trace_clear_tpoint_mask", 00:06:02.594 "trace_set_tpoint_mask", 00:06:02.594 "vfu_tgt_set_base_path", 00:06:02.594 "framework_get_pci_devices", 00:06:02.594 "framework_get_config", 00:06:02.594 "framework_get_subsystems", 00:06:02.594 "keyring_get_keys", 00:06:02.594 "iobuf_get_stats", 00:06:02.594 "iobuf_set_options", 00:06:02.594 "sock_get_default_impl", 00:06:02.594 "sock_set_default_impl", 00:06:02.594 "sock_impl_set_options", 00:06:02.594 "sock_impl_get_options", 00:06:02.594 "vmd_rescan", 00:06:02.594 "vmd_remove_device", 00:06:02.594 "vmd_enable", 00:06:02.594 "accel_get_stats", 00:06:02.594 "accel_set_options", 00:06:02.594 "accel_set_driver", 00:06:02.594 "accel_crypto_key_destroy", 00:06:02.594 "accel_crypto_keys_get", 00:06:02.594 "accel_crypto_key_create", 00:06:02.594 "accel_assign_opc", 00:06:02.594 "accel_get_module_info", 00:06:02.594 "accel_get_opc_assignments", 00:06:02.594 "notify_get_notifications", 00:06:02.594 "notify_get_types", 00:06:02.594 "bdev_get_histogram", 00:06:02.594 "bdev_enable_histogram", 00:06:02.594 "bdev_set_qos_limit", 00:06:02.594 "bdev_set_qd_sampling_period", 00:06:02.594 "bdev_get_bdevs", 00:06:02.594 "bdev_reset_iostat", 00:06:02.594 "bdev_get_iostat", 00:06:02.594 "bdev_examine", 00:06:02.594 "bdev_wait_for_examine", 00:06:02.594 "bdev_set_options", 00:06:02.594 "scsi_get_devices", 00:06:02.594 "thread_set_cpumask", 00:06:02.594 "framework_get_scheduler", 00:06:02.594 "framework_set_scheduler", 00:06:02.594 "framework_get_reactors", 00:06:02.594 "thread_get_io_channels", 00:06:02.594 "thread_get_pollers", 00:06:02.594 "thread_get_stats", 00:06:02.594 "framework_monitor_context_switch", 00:06:02.594 "spdk_kill_instance", 00:06:02.594 "log_enable_timestamps", 00:06:02.594 "log_get_flags", 00:06:02.594 "log_clear_flag", 00:06:02.594 "log_set_flag", 00:06:02.594 "log_get_level", 00:06:02.594 "log_set_level", 00:06:02.594 "log_get_print_level", 00:06:02.594 "log_set_print_level", 00:06:02.594 "framework_enable_cpumask_locks", 00:06:02.594 "framework_disable_cpumask_locks", 00:06:02.594 "framework_wait_init", 00:06:02.594 "framework_start_init", 00:06:02.594 "virtio_blk_create_transport", 00:06:02.594 "virtio_blk_get_transports", 00:06:02.594 "vhost_controller_set_coalescing", 00:06:02.594 "vhost_get_controllers", 00:06:02.594 "vhost_delete_controller", 00:06:02.594 "vhost_create_blk_controller", 00:06:02.594 "vhost_scsi_controller_remove_target", 00:06:02.594 
"vhost_scsi_controller_add_target", 00:06:02.594 "vhost_start_scsi_controller", 00:06:02.594 "vhost_create_scsi_controller", 00:06:02.594 "ublk_recover_disk", 00:06:02.594 "ublk_get_disks", 00:06:02.594 "ublk_stop_disk", 00:06:02.594 "ublk_start_disk", 00:06:02.594 "ublk_destroy_target", 00:06:02.594 "ublk_create_target", 00:06:02.594 "nbd_get_disks", 00:06:02.594 "nbd_stop_disk", 00:06:02.594 "nbd_start_disk", 00:06:02.594 "env_dpdk_get_mem_stats", 00:06:02.594 "nvmf_subsystem_get_listeners", 00:06:02.594 "nvmf_subsystem_get_qpairs", 00:06:02.594 "nvmf_subsystem_get_controllers", 00:06:02.594 "nvmf_get_stats", 00:06:02.594 "nvmf_get_transports", 00:06:02.594 "nvmf_create_transport", 00:06:02.594 "nvmf_get_targets", 00:06:02.594 "nvmf_delete_target", 00:06:02.594 "nvmf_create_target", 00:06:02.594 "nvmf_subsystem_allow_any_host", 00:06:02.594 "nvmf_subsystem_remove_host", 00:06:02.594 "nvmf_subsystem_add_host", 00:06:02.594 "nvmf_ns_remove_host", 00:06:02.594 "nvmf_ns_add_host", 00:06:02.594 "nvmf_subsystem_remove_ns", 00:06:02.594 "nvmf_subsystem_add_ns", 00:06:02.594 "nvmf_subsystem_listener_set_ana_state", 00:06:02.594 "nvmf_discovery_get_referrals", 00:06:02.594 "nvmf_discovery_remove_referral", 00:06:02.594 "nvmf_discovery_add_referral", 00:06:02.594 "nvmf_subsystem_remove_listener", 00:06:02.594 "nvmf_subsystem_add_listener", 00:06:02.594 "nvmf_delete_subsystem", 00:06:02.594 "nvmf_create_subsystem", 00:06:02.594 "nvmf_get_subsystems", 00:06:02.594 "nvmf_set_crdt", 00:06:02.594 "nvmf_set_config", 00:06:02.594 "nvmf_set_max_subsystems", 00:06:02.594 "iscsi_get_histogram", 00:06:02.594 "iscsi_enable_histogram", 00:06:02.594 "iscsi_set_options", 00:06:02.594 "iscsi_get_auth_groups", 00:06:02.594 "iscsi_auth_group_remove_secret", 00:06:02.594 "iscsi_auth_group_add_secret", 00:06:02.594 "iscsi_delete_auth_group", 00:06:02.594 "iscsi_create_auth_group", 00:06:02.594 "iscsi_set_discovery_auth", 00:06:02.594 "iscsi_get_options", 00:06:02.594 "iscsi_target_node_request_logout", 00:06:02.594 "iscsi_target_node_set_redirect", 00:06:02.594 "iscsi_target_node_set_auth", 00:06:02.594 "iscsi_target_node_add_lun", 00:06:02.594 "iscsi_get_stats", 00:06:02.594 "iscsi_get_connections", 00:06:02.594 "iscsi_portal_group_set_auth", 00:06:02.594 "iscsi_start_portal_group", 00:06:02.594 "iscsi_delete_portal_group", 00:06:02.594 "iscsi_create_portal_group", 00:06:02.594 "iscsi_get_portal_groups", 00:06:02.594 "iscsi_delete_target_node", 00:06:02.594 "iscsi_target_node_remove_pg_ig_maps", 00:06:02.594 "iscsi_target_node_add_pg_ig_maps", 00:06:02.594 "iscsi_create_target_node", 00:06:02.594 "iscsi_get_target_nodes", 00:06:02.594 "iscsi_delete_initiator_group", 00:06:02.594 "iscsi_initiator_group_remove_initiators", 00:06:02.594 "iscsi_initiator_group_add_initiators", 00:06:02.594 "iscsi_create_initiator_group", 00:06:02.594 "iscsi_get_initiator_groups", 00:06:02.594 "keyring_file_remove_key", 00:06:02.594 "keyring_file_add_key", 00:06:02.594 "vfu_virtio_create_scsi_endpoint", 00:06:02.594 "vfu_virtio_scsi_remove_target", 00:06:02.594 "vfu_virtio_scsi_add_target", 00:06:02.594 "vfu_virtio_create_blk_endpoint", 00:06:02.594 "vfu_virtio_delete_endpoint", 00:06:02.594 "iaa_scan_accel_module", 00:06:02.594 "dsa_scan_accel_module", 00:06:02.594 "ioat_scan_accel_module", 00:06:02.594 "accel_error_inject_error", 00:06:02.594 "bdev_iscsi_delete", 00:06:02.594 "bdev_iscsi_create", 00:06:02.594 "bdev_iscsi_set_options", 00:06:02.594 "bdev_virtio_attach_controller", 00:06:02.594 "bdev_virtio_scsi_get_devices", 
00:06:02.594 "bdev_virtio_detach_controller", 00:06:02.594 "bdev_virtio_blk_set_hotplug", 00:06:02.594 "bdev_ftl_set_property", 00:06:02.594 "bdev_ftl_get_properties", 00:06:02.594 "bdev_ftl_get_stats", 00:06:02.594 "bdev_ftl_unmap", 00:06:02.594 "bdev_ftl_unload", 00:06:02.594 "bdev_ftl_delete", 00:06:02.594 "bdev_ftl_load", 00:06:02.594 "bdev_ftl_create", 00:06:02.594 "bdev_aio_delete", 00:06:02.594 "bdev_aio_rescan", 00:06:02.594 "bdev_aio_create", 00:06:02.594 "blobfs_create", 00:06:02.594 "blobfs_detect", 00:06:02.594 "blobfs_set_cache_size", 00:06:02.594 "bdev_zone_block_delete", 00:06:02.594 "bdev_zone_block_create", 00:06:02.595 "bdev_delay_delete", 00:06:02.595 "bdev_delay_create", 00:06:02.595 "bdev_delay_update_latency", 00:06:02.595 "bdev_split_delete", 00:06:02.595 "bdev_split_create", 00:06:02.595 "bdev_error_inject_error", 00:06:02.595 "bdev_error_delete", 00:06:02.595 "bdev_error_create", 00:06:02.595 "bdev_raid_set_options", 00:06:02.595 "bdev_raid_remove_base_bdev", 00:06:02.595 "bdev_raid_add_base_bdev", 00:06:02.595 "bdev_raid_delete", 00:06:02.595 "bdev_raid_create", 00:06:02.595 "bdev_raid_get_bdevs", 00:06:02.595 "bdev_lvol_grow_lvstore", 00:06:02.595 "bdev_lvol_get_lvols", 00:06:02.595 "bdev_lvol_get_lvstores", 00:06:02.595 "bdev_lvol_delete", 00:06:02.595 "bdev_lvol_set_read_only", 00:06:02.595 "bdev_lvol_resize", 00:06:02.595 "bdev_lvol_decouple_parent", 00:06:02.595 "bdev_lvol_inflate", 00:06:02.595 "bdev_lvol_rename", 00:06:02.595 "bdev_lvol_clone_bdev", 00:06:02.595 "bdev_lvol_clone", 00:06:02.595 "bdev_lvol_snapshot", 00:06:02.595 "bdev_lvol_create", 00:06:02.595 "bdev_lvol_delete_lvstore", 00:06:02.595 "bdev_lvol_rename_lvstore", 00:06:02.595 "bdev_lvol_create_lvstore", 00:06:02.595 "bdev_passthru_delete", 00:06:02.595 "bdev_passthru_create", 00:06:02.595 "bdev_nvme_cuse_unregister", 00:06:02.595 "bdev_nvme_cuse_register", 00:06:02.595 "bdev_opal_new_user", 00:06:02.595 "bdev_opal_set_lock_state", 00:06:02.595 "bdev_opal_delete", 00:06:02.595 "bdev_opal_get_info", 00:06:02.595 "bdev_opal_create", 00:06:02.595 "bdev_nvme_opal_revert", 00:06:02.595 "bdev_nvme_opal_init", 00:06:02.595 "bdev_nvme_send_cmd", 00:06:02.595 "bdev_nvme_get_path_iostat", 00:06:02.595 "bdev_nvme_get_mdns_discovery_info", 00:06:02.595 "bdev_nvme_stop_mdns_discovery", 00:06:02.595 "bdev_nvme_start_mdns_discovery", 00:06:02.595 "bdev_nvme_set_multipath_policy", 00:06:02.595 "bdev_nvme_set_preferred_path", 00:06:02.595 "bdev_nvme_get_io_paths", 00:06:02.595 "bdev_nvme_remove_error_injection", 00:06:02.595 "bdev_nvme_add_error_injection", 00:06:02.595 "bdev_nvme_get_discovery_info", 00:06:02.595 "bdev_nvme_stop_discovery", 00:06:02.595 "bdev_nvme_start_discovery", 00:06:02.595 "bdev_nvme_get_controller_health_info", 00:06:02.595 "bdev_nvme_disable_controller", 00:06:02.595 "bdev_nvme_enable_controller", 00:06:02.595 "bdev_nvme_reset_controller", 00:06:02.595 "bdev_nvme_get_transport_statistics", 00:06:02.595 "bdev_nvme_apply_firmware", 00:06:02.595 "bdev_nvme_detach_controller", 00:06:02.595 "bdev_nvme_get_controllers", 00:06:02.595 "bdev_nvme_attach_controller", 00:06:02.595 "bdev_nvme_set_hotplug", 00:06:02.595 "bdev_nvme_set_options", 00:06:02.595 "bdev_null_resize", 00:06:02.595 "bdev_null_delete", 00:06:02.595 "bdev_null_create", 00:06:02.595 "bdev_malloc_delete", 00:06:02.595 "bdev_malloc_create" 00:06:02.595 ] 00:06:02.595 02:46:53 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:02.595 02:46:53 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 
00:06:02.595 02:46:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:02.595 02:46:53 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:02.595 02:46:53 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3481827 00:06:02.595 02:46:53 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 3481827 ']' 00:06:02.595 02:46:53 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 3481827 00:06:02.595 02:46:53 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:06:02.595 02:46:53 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:02.595 02:46:53 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3481827 00:06:02.853 02:46:53 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:02.853 02:46:53 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:02.853 02:46:53 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3481827' 00:06:02.853 killing process with pid 3481827 00:06:02.853 02:46:53 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 3481827 00:06:02.853 02:46:53 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 3481827 00:06:03.120 00:06:03.120 real 0m1.001s 00:06:03.120 user 0m1.641s 00:06:03.120 sys 0m0.490s 00:06:03.120 02:46:53 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:03.120 02:46:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:03.120 ************************************ 00:06:03.120 END TEST spdkcli_tcp 00:06:03.120 ************************************ 00:06:03.120 02:46:53 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:03.120 02:46:53 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:03.120 02:46:53 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:03.120 02:46:53 -- common/autotest_common.sh@10 -- # set +x 00:06:03.120 ************************************ 00:06:03.120 START TEST dpdk_mem_utility 00:06:03.120 ************************************ 00:06:03.120 02:46:53 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:03.120 * Looking for test storage... 00:06:03.120 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility 00:06:03.120 02:46:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:03.120 02:46:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3482147 00:06:03.120 02:46:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3482147 00:06:03.120 02:46:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:03.120 02:46:53 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 3482147 ']' 00:06:03.120 02:46:53 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.120 02:46:53 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:03.120 02:46:53 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:03.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.120 02:46:53 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:03.120 02:46:53 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:03.380 [2024-05-13 02:46:53.935581] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:06:03.380 [2024-05-13 02:46:53.935665] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3482147 ] 00:06:03.380 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.380 [2024-05-13 02:46:53.972868] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:03.380 [2024-05-13 02:46:54.005433] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.380 [2024-05-13 02:46:54.043955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.638 02:46:54 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:03.638 02:46:54 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:06:03.638 02:46:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:03.638 02:46:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:03.638 02:46:54 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:03.638 02:46:54 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:03.638 { 00:06:03.638 "filename": "/tmp/spdk_mem_dump.txt" 00:06:03.638 } 00:06:03.638 02:46:54 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:03.638 02:46:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:03.638 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:03.638 1 heaps totaling size 814.000000 MiB 00:06:03.638 size: 814.000000 MiB heap id: 0 00:06:03.638 end heaps---------- 00:06:03.638 8 mempools totaling size 598.116089 MiB 00:06:03.638 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:03.638 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:03.638 size: 84.521057 MiB name: bdev_io_3482147 00:06:03.638 size: 51.011292 MiB name: evtpool_3482147 00:06:03.638 size: 50.003479 MiB name: msgpool_3482147 00:06:03.638 size: 21.763794 MiB name: PDU_Pool 00:06:03.638 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:03.638 size: 0.026123 MiB name: Session_Pool 00:06:03.638 end mempools------- 00:06:03.638 6 memzones totaling size 4.142822 MiB 00:06:03.638 size: 1.000366 MiB name: RG_ring_0_3482147 00:06:03.638 size: 1.000366 MiB name: RG_ring_1_3482147 00:06:03.638 size: 1.000366 MiB name: RG_ring_4_3482147 00:06:03.638 size: 1.000366 MiB name: RG_ring_5_3482147 00:06:03.638 size: 0.125366 MiB name: RG_ring_2_3482147 00:06:03.638 size: 0.015991 MiB name: RG_ring_3_3482147 00:06:03.638 end memzones------- 00:06:03.638 02:46:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:03.638 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:03.638 list of free elements. 
size: 12.519348 MiB 00:06:03.638 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:03.638 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:03.638 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:03.638 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:03.638 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:03.638 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:03.638 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:03.638 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:03.638 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:03.638 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:03.638 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:03.638 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:03.638 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:03.638 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:03.638 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:03.638 list of standard malloc elements. size: 199.218079 MiB 00:06:03.638 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:03.638 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:03.638 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:03.638 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:03.638 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:03.638 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:03.638 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:03.638 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:03.638 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:03.638 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:03.638 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:03.639 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:03.639 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:03.639 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:03.639 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:03.639 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:03.639 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:03.639 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:03.639 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:03.639 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:03.639 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:03.639 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:03.639 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:03.639 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:03.639 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:03.639 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:03.639 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:03.639 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:03.639 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:03.639 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:03.639 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:03.639 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:03.639 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:06:03.639 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:03.639 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:03.639 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:03.639 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:03.639 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:03.639 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:03.639 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:03.639 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:03.639 list of memzone associated elements. size: 602.262573 MiB 00:06:03.639 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:03.639 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:03.639 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:03.639 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:03.639 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:03.639 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3482147_0 00:06:03.639 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:03.639 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3482147_0 00:06:03.639 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:03.639 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3482147_0 00:06:03.639 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:03.639 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:03.639 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:03.639 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:03.639 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:03.639 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3482147 00:06:03.639 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:03.639 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3482147 00:06:03.639 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:03.639 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3482147 00:06:03.639 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:03.639 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:03.639 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:03.639 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:03.639 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:03.639 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:03.639 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:03.639 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:03.639 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:03.639 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3482147 00:06:03.639 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:03.639 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3482147 00:06:03.639 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:03.639 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3482147 00:06:03.639 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:03.639 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3482147 00:06:03.639 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:03.639 associated 
memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3482147 00:06:03.639 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:03.639 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:03.639 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:03.639 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:03.639 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:03.639 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:03.639 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:03.639 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3482147 00:06:03.639 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:03.639 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:03.639 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:03.639 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:03.639 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:03.639 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3482147 00:06:03.639 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:03.639 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:03.639 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:03.639 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3482147 00:06:03.639 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:03.639 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3482147 00:06:03.639 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:03.639 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:03.639 02:46:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:03.639 02:46:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3482147 00:06:03.639 02:46:54 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 3482147 ']' 00:06:03.639 02:46:54 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 3482147 00:06:03.639 02:46:54 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:06:03.639 02:46:54 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:03.639 02:46:54 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3482147 00:06:03.639 02:46:54 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:03.639 02:46:54 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:03.639 02:46:54 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3482147' 00:06:03.639 killing process with pid 3482147 00:06:03.639 02:46:54 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 3482147 00:06:03.639 02:46:54 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 3482147 00:06:03.896 00:06:03.896 real 0m0.891s 00:06:03.896 user 0m0.821s 00:06:03.896 sys 0m0.414s 00:06:03.896 02:46:54 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:03.896 02:46:54 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:03.896 ************************************ 00:06:03.896 END TEST dpdk_mem_utility 00:06:03.896 ************************************ 00:06:04.153 02:46:54 -- spdk/autotest.sh@177 -- # run_test event 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:06:04.153 02:46:54 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:04.153 02:46:54 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:04.153 02:46:54 -- common/autotest_common.sh@10 -- # set +x 00:06:04.153 ************************************ 00:06:04.153 START TEST event 00:06:04.153 ************************************ 00:06:04.153 02:46:54 event -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:06:04.153 * Looking for test storage... 00:06:04.153 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:06:04.153 02:46:54 event -- event/event.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:04.153 02:46:54 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:04.153 02:46:54 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:04.153 02:46:54 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:04.153 02:46:54 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:04.153 02:46:54 event -- common/autotest_common.sh@10 -- # set +x 00:06:04.153 ************************************ 00:06:04.153 START TEST event_perf 00:06:04.153 ************************************ 00:06:04.153 02:46:54 event.event_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:04.153 Running I/O for 1 seconds...[2024-05-13 02:46:54.948671] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:06:04.153 [2024-05-13 02:46:54.948751] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3482460 ] 00:06:04.412 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.412 [2024-05-13 02:46:54.988640] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:04.412 [2024-05-13 02:46:55.020390] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:04.412 [2024-05-13 02:46:55.062025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.412 [2024-05-13 02:46:55.062123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:04.412 [2024-05-13 02:46:55.062185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:04.412 [2024-05-13 02:46:55.062187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.348 Running I/O for 1 seconds... 00:06:05.348 lcore 0: 201710 00:06:05.348 lcore 1: 201711 00:06:05.348 lcore 2: 201711 00:06:05.348 lcore 3: 201711 00:06:05.348 done. 
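Editor's note: the per-lcore counts above come from the event_perf binary in the SPDK build tree. A short sketch of invoking it (and the reactor binary used by the next test) standalone, assuming a built checkout at ./spdk:

    # run 4 reactors (core mask 0xF) for 1 second, then print events handled per lcore
    ./spdk/test/event/event_perf/event_perf -m 0xF -t 1

    # single-reactor test that schedules timed ticks, as exercised by the event_reactor section below
    ./spdk/test/event/reactor/reactor -t 1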
00:06:05.348 00:06:05.348 real 0m1.188s 00:06:05.348 user 0m4.095s 00:06:05.348 sys 0m0.091s 00:06:05.348 02:46:56 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:05.348 02:46:56 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:05.348 ************************************ 00:06:05.348 END TEST event_perf 00:06:05.348 ************************************ 00:06:05.606 02:46:56 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:05.606 02:46:56 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:05.606 02:46:56 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:05.606 02:46:56 event -- common/autotest_common.sh@10 -- # set +x 00:06:05.606 ************************************ 00:06:05.606 START TEST event_reactor 00:06:05.606 ************************************ 00:06:05.606 02:46:56 event.event_reactor -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:05.606 [2024-05-13 02:46:56.227238] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:06:05.606 [2024-05-13 02:46:56.227328] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3482648 ] 00:06:05.606 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.606 [2024-05-13 02:46:56.268618] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:05.606 [2024-05-13 02:46:56.301404] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.606 [2024-05-13 02:46:56.340838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.980 test_start 00:06:06.980 oneshot 00:06:06.980 tick 100 00:06:06.980 tick 100 00:06:06.980 tick 250 00:06:06.980 tick 100 00:06:06.980 tick 100 00:06:06.980 tick 100 00:06:06.980 tick 250 00:06:06.980 tick 500 00:06:06.980 tick 100 00:06:06.980 tick 100 00:06:06.980 tick 250 00:06:06.980 tick 100 00:06:06.980 tick 100 00:06:06.980 test_end 00:06:06.980 00:06:06.980 real 0m1.185s 00:06:06.980 user 0m1.093s 00:06:06.980 sys 0m0.089s 00:06:06.980 02:46:57 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:06.980 02:46:57 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:06.980 ************************************ 00:06:06.980 END TEST event_reactor 00:06:06.980 ************************************ 00:06:06.980 02:46:57 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:06.980 02:46:57 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:06.980 02:46:57 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:06.980 02:46:57 event -- common/autotest_common.sh@10 -- # set +x 00:06:06.980 ************************************ 00:06:06.980 START TEST event_reactor_perf 00:06:06.980 ************************************ 00:06:06.980 02:46:57 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:06.980 [2024-05-13 02:46:57.497865] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 
24.07.0-rc0 initialization... 00:06:06.980 [2024-05-13 02:46:57.498000] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3482806 ] 00:06:06.980 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.980 [2024-05-13 02:46:57.539666] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:06.980 [2024-05-13 02:46:57.569254] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.980 [2024-05-13 02:46:57.608507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.916 test_start 00:06:07.916 test_end 00:06:07.916 Performance: 975064 events per second 00:06:07.916 00:06:07.916 real 0m1.185s 00:06:07.916 user 0m1.092s 00:06:07.916 sys 0m0.089s 00:06:07.916 02:46:58 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:07.916 02:46:58 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:07.916 ************************************ 00:06:07.916 END TEST event_reactor_perf 00:06:07.916 ************************************ 00:06:07.916 02:46:58 event -- event/event.sh@49 -- # uname -s 00:06:07.916 02:46:58 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:07.916 02:46:58 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:07.916 02:46:58 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:07.916 02:46:58 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:07.916 02:46:58 event -- common/autotest_common.sh@10 -- # set +x 00:06:08.175 ************************************ 00:06:08.175 START TEST event_scheduler 00:06:08.175 ************************************ 00:06:08.175 02:46:58 event.event_scheduler -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:08.175 * Looking for test storage... 00:06:08.175 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler 00:06:08.175 02:46:58 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:08.175 02:46:58 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3483107 00:06:08.175 02:46:58 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:08.175 02:46:58 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:08.175 02:46:58 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3483107 00:06:08.175 02:46:58 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 3483107 ']' 00:06:08.175 02:46:58 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.175 02:46:58 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:08.175 02:46:58 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
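Editor's note: waitforlisten in autotest_common.sh polls the target's RPC socket until it answers. A rough hand-rolled equivalent, assuming ./spdk and the default socket path (the polling loop is illustrative, not the helper's actual code):

    # start the scheduler test app in the background with the same flags scheduler.sh uses
    ./spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &

    # poll until the RPC server responds; spdk_get_version is available even before framework_start_init
    until ./spdk/scripts/rpc.py -t 1 -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        sleep 0.1
    done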
00:06:08.175 02:46:58 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:08.175 02:46:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:08.175 [2024-05-13 02:46:58.891823] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:06:08.175 [2024-05-13 02:46:58.891908] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3483107 ] 00:06:08.175 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.175 [2024-05-13 02:46:58.931909] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:08.175 [2024-05-13 02:46:58.960052] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:08.434 [2024-05-13 02:46:59.002563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.435 [2024-05-13 02:46:59.002580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.435 [2024-05-13 02:46:59.002665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:08.435 [2024-05-13 02:46:59.002667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:08.435 02:46:59 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:08.435 02:46:59 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:06:08.435 02:46:59 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:08.435 02:46:59 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:08.435 02:46:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:08.435 POWER: Env isn't set yet! 00:06:08.435 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:08.435 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:08.435 POWER: Cannot set governor of lcore 0 to userspace 00:06:08.435 POWER: Attempting to initialise PSTAT power management... 00:06:08.435 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:06:08.435 POWER: Initialized successfully for lcore 0 power management 00:06:08.435 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:06:08.435 POWER: Initialized successfully for lcore 1 power management 00:06:08.435 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:06:08.435 POWER: Initialized successfully for lcore 2 power management 00:06:08.435 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:06:08.435 POWER: Initialized successfully for lcore 3 power management 00:06:08.435 02:46:59 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:08.435 02:46:59 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:08.435 02:46:59 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:08.435 02:46:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:08.435 [2024-05-13 02:46:59.162002] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
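Editor's note: the scheduler selection above is plain JSON-RPC and can be issued by hand once the app is waiting for RPC (socket path assumed to be the default /var/tmp/spdk.sock):

    # pick the dynamic scheduler, then let subsystem initialization proceed
    ./spdk/scripts/rpc.py framework_set_scheduler dynamic
    ./spdk/scripts/rpc.py framework_start_init

    # read back the active scheduler
    ./spdk/scripts/rpc.py framework_get_scheduler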
00:06:08.435 02:46:59 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:08.435 02:46:59 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:08.435 02:46:59 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:08.435 02:46:59 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:08.435 02:46:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:08.435 ************************************ 00:06:08.435 START TEST scheduler_create_thread 00:06:08.435 ************************************ 00:06:08.435 02:46:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:06:08.435 02:46:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:08.435 02:46:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:08.435 02:46:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.435 2 00:06:08.435 02:46:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:08.435 02:46:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:08.435 02:46:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:08.435 02:46:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.435 3 00:06:08.435 02:46:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:08.435 02:46:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:08.435 02:46:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:08.435 02:46:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.694 4 00:06:08.694 02:46:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:08.694 02:46:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:08.694 02:46:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:08.694 02:46:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.694 5 00:06:08.694 02:46:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:08.694 02:46:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:08.694 02:46:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:08.694 02:46:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.694 6 00:06:08.694 02:46:59 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:08.694 02:46:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:08.694 02:46:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:08.694 02:46:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.694 7 00:06:08.694 02:46:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:08.694 02:46:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:08.694 02:46:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:08.694 02:46:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.694 8 00:06:08.694 02:46:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:08.694 02:46:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:08.694 02:46:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:08.694 02:46:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.694 9 00:06:08.694 02:46:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:08.694 02:46:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:08.694 02:46:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:08.694 02:46:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.694 10 00:06:08.694 02:46:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:08.694 02:46:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:08.694 02:46:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:08.694 02:46:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.694 02:46:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:08.694 02:46:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:08.694 02:46:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:08.694 02:46:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:08.694 02:46:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:09.632 02:47:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:09.632 02:47:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:09.632 02:47:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:09.632 02:47:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.011 02:47:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:11.011 02:47:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:11.011 02:47:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:11.011 02:47:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:11.011 02:47:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.947 02:47:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:11.947 00:06:11.947 real 0m3.383s 00:06:11.947 user 0m0.022s 00:06:11.947 sys 0m0.009s 00:06:11.947 02:47:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:11.947 02:47:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.947 ************************************ 00:06:11.947 END TEST scheduler_create_thread 00:06:11.947 ************************************ 00:06:11.947 02:47:02 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:11.947 02:47:02 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3483107 00:06:11.947 02:47:02 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 3483107 ']' 00:06:11.947 02:47:02 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 3483107 00:06:11.947 02:47:02 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 00:06:11.947 02:47:02 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:11.947 02:47:02 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3483107 00:06:11.947 02:47:02 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:06:11.947 02:47:02 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:06:11.947 02:47:02 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3483107' 00:06:11.947 killing process with pid 3483107 00:06:11.947 02:47:02 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 3483107 00:06:11.947 02:47:02 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 3483107 00:06:12.206 [2024-05-13 02:47:02.974041] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
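Editor's note: the scheduler_thread_* calls used by scheduler_create_thread are not core RPCs; they come from a test-local rpc.py plugin (scheduler_plugin) that drives the scheduler test app. A hedged sketch, assuming the plugin module is importable (e.g. the test directory on PYTHONPATH):

    export PYTHONPATH=./spdk/test/event/scheduler

    # create a pinned thread on core 0 at 100% active load; the call returns the new thread id
    tid=$(./spdk/scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create \
              -n active_pinned -m 0x1 -a 100)

    # lower it to 50% active, then delete it
    ./spdk/scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
    ./spdk/scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete "$tid"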
00:06:12.466 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:06:12.466 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:06:12.466 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:06:12.466 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:06:12.466 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:06:12.466 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:06:12.466 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:06:12.466 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:06:12.466 00:06:12.466 real 0m4.429s 00:06:12.466 user 0m7.793s 00:06:12.466 sys 0m0.400s 00:06:12.466 02:47:03 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:12.466 02:47:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:12.466 ************************************ 00:06:12.466 END TEST event_scheduler 00:06:12.466 ************************************ 00:06:12.466 02:47:03 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:12.466 02:47:03 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:12.466 02:47:03 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:12.466 02:47:03 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:12.466 02:47:03 event -- common/autotest_common.sh@10 -- # set +x 00:06:12.725 ************************************ 00:06:12.725 START TEST app_repeat 00:06:12.725 ************************************ 00:06:12.725 02:47:03 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:06:12.725 02:47:03 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.725 02:47:03 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.725 02:47:03 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:12.725 02:47:03 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:12.725 02:47:03 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:12.725 02:47:03 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:12.725 02:47:03 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:12.725 02:47:03 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3483949 00:06:12.725 02:47:03 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:12.725 02:47:03 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3483949' 00:06:12.725 Process app_repeat pid: 3483949 00:06:12.725 02:47:03 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:12.725 02:47:03 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:12.725 spdk_app_start Round 0 00:06:12.725 02:47:03 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3483949 /var/tmp/spdk-nbd.sock 00:06:12.725 02:47:03 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3483949 ']' 00:06:12.725 02:47:03 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:12.725 02:47:03 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:12.725 02:47:03 
event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:12.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:12.725 02:47:03 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:12.725 02:47:03 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:12.725 02:47:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:12.725 [2024-05-13 02:47:03.301122] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:06:12.725 [2024-05-13 02:47:03.301201] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3483949 ] 00:06:12.725 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.725 [2024-05-13 02:47:03.340724] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:12.725 [2024-05-13 02:47:03.373323] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:12.725 [2024-05-13 02:47:03.414002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.725 [2024-05-13 02:47:03.414006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.725 02:47:03 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:12.725 02:47:03 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:12.725 02:47:03 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:12.984 Malloc0 00:06:12.984 02:47:03 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:13.243 Malloc1 00:06:13.243 02:47:03 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:13.243 02:47:03 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.243 02:47:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:13.243 02:47:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:13.243 02:47:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.243 02:47:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:13.243 02:47:03 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:13.243 02:47:03 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.243 02:47:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:13.243 02:47:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:13.243 02:47:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.243 02:47:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:13.243 02:47:03 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:13.243 02:47:03 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:13.243 02:47:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:13.243 02:47:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:13.243 /dev/nbd0 00:06:13.243 02:47:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:13.243 02:47:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:13.243 02:47:04 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:13.243 02:47:04 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:13.243 02:47:04 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:13.243 02:47:04 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:13.243 02:47:04 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:13.243 02:47:04 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:13.243 02:47:04 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:13.243 02:47:04 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:13.243 02:47:04 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:13.502 1+0 records in 00:06:13.502 1+0 records out 00:06:13.502 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000242536 s, 16.9 MB/s 00:06:13.502 02:47:04 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:13.502 02:47:04 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:13.502 02:47:04 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:13.502 02:47:04 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:13.502 02:47:04 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:13.502 02:47:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:13.502 02:47:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:13.502 02:47:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:13.502 /dev/nbd1 00:06:13.502 02:47:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:13.502 02:47:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:13.502 02:47:04 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:13.502 02:47:04 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:13.502 02:47:04 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:13.502 02:47:04 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:13.502 02:47:04 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:13.502 02:47:04 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:13.502 02:47:04 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:13.502 02:47:04 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:13.502 02:47:04 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:13.502 1+0 records in 00:06:13.502 1+0 records out 00:06:13.502 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000251908 s, 16.3 MB/s 00:06:13.502 02:47:04 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:13.502 02:47:04 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:13.502 02:47:04 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:13.502 02:47:04 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:13.502 02:47:04 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:13.502 02:47:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:13.502 02:47:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:13.502 02:47:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:13.502 02:47:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.502 02:47:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:13.761 02:47:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:13.761 { 00:06:13.761 "nbd_device": "/dev/nbd0", 00:06:13.761 "bdev_name": "Malloc0" 00:06:13.761 }, 00:06:13.761 { 00:06:13.761 "nbd_device": "/dev/nbd1", 00:06:13.761 "bdev_name": "Malloc1" 00:06:13.761 } 00:06:13.761 ]' 00:06:13.761 02:47:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:13.761 { 00:06:13.761 "nbd_device": "/dev/nbd0", 00:06:13.761 "bdev_name": "Malloc0" 00:06:13.761 }, 00:06:13.761 { 00:06:13.761 "nbd_device": "/dev/nbd1", 00:06:13.761 "bdev_name": "Malloc1" 00:06:13.761 } 00:06:13.761 ]' 00:06:13.761 02:47:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:13.761 02:47:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:13.761 /dev/nbd1' 00:06:13.761 02:47:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:13.761 /dev/nbd1' 00:06:13.761 02:47:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:13.761 02:47:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:13.761 02:47:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:13.761 02:47:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:13.761 02:47:04 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:13.761 02:47:04 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:13.761 02:47:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.761 02:47:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:13.761 02:47:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:13.761 02:47:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:13.761 02:47:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:13.761 02:47:04 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 
00:06:13.761 256+0 records in 00:06:13.761 256+0 records out 00:06:13.761 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0109326 s, 95.9 MB/s 00:06:13.761 02:47:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:13.761 02:47:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:13.762 256+0 records in 00:06:13.762 256+0 records out 00:06:13.762 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0201425 s, 52.1 MB/s 00:06:13.762 02:47:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:13.762 02:47:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:14.021 256+0 records in 00:06:14.021 256+0 records out 00:06:14.021 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0218014 s, 48.1 MB/s 00:06:14.021 02:47:04 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:14.021 02:47:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.021 02:47:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:14.021 02:47:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:14.021 02:47:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:14.021 02:47:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:14.021 02:47:04 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:14.021 02:47:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:14.021 02:47:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:14.021 02:47:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:14.021 02:47:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:14.021 02:47:04 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:14.021 02:47:04 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:14.021 02:47:04 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.021 02:47:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.021 02:47:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:14.021 02:47:04 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:14.021 02:47:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:14.021 02:47:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:14.021 02:47:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:14.021 02:47:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:14.021 02:47:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:14.021 02:47:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.021 
02:47:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.021 02:47:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:14.021 02:47:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:14.021 02:47:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:14.021 02:47:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:14.021 02:47:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:14.280 02:47:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:14.280 02:47:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:14.280 02:47:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:14.280 02:47:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.280 02:47:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.280 02:47:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:14.280 02:47:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:14.280 02:47:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:14.280 02:47:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:14.280 02:47:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.280 02:47:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:14.539 02:47:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:14.539 02:47:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:14.539 02:47:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:14.539 02:47:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:14.539 02:47:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:14.539 02:47:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:14.539 02:47:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:14.539 02:47:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:14.539 02:47:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:14.539 02:47:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:14.539 02:47:05 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:14.539 02:47:05 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:14.539 02:47:05 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:14.798 02:47:05 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:14.798 [2024-05-13 02:47:05.570822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:15.058 [2024-05-13 02:47:05.606751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.058 [2024-05-13 02:47:05.606754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.058 [2024-05-13 02:47:05.647745] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:15.058 [2024-05-13 02:47:05.647786] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
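The sequence above is the data-verification pass that app_repeat runs once Malloc0 and Malloc1 are exported over NBD: a 1 MiB file of random data is generated, copied onto each /dev/nbdX with direct I/O, compared back, and then the temp file is removed. A minimal standalone sketch of that flow, assembled from the dd and cmp commands visible in the trace (the tmp_file path here is a placeholder; the real helper tagged bdev/nbd_common.sh takes the device list and operation as arguments):

    tmp_file=/tmp/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)

    # write: fill a 1 MiB reference file with random data, then copy it onto every NBD device
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done

    # verify: each device must read back byte-identical to the reference file
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev" || echo "data mismatch on $dev" >&2
    done
    rm "$tmp_file"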
00:06:18.346 02:47:08 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:18.346 02:47:08 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:18.346 spdk_app_start Round 1 00:06:18.346 02:47:08 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3483949 /var/tmp/spdk-nbd.sock 00:06:18.346 02:47:08 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3483949 ']' 00:06:18.346 02:47:08 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:18.346 02:47:08 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:18.346 02:47:08 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:18.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:18.346 02:47:08 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:18.346 02:47:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:18.346 02:47:08 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:18.346 02:47:08 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:18.346 02:47:08 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:18.346 Malloc0 00:06:18.346 02:47:08 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:18.346 Malloc1 00:06:18.346 02:47:08 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:18.346 02:47:08 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.346 02:47:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:18.346 02:47:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:18.346 02:47:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.346 02:47:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:18.346 02:47:08 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:18.346 02:47:08 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.346 02:47:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:18.346 02:47:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:18.346 02:47:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.347 02:47:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:18.347 02:47:08 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:18.347 02:47:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:18.347 02:47:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:18.347 02:47:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:18.347 /dev/nbd0 00:06:18.347 02:47:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:18.347 02:47:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:06:18.347 02:47:09 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:18.347 02:47:09 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:18.347 02:47:09 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:18.347 02:47:09 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:18.347 02:47:09 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:18.347 02:47:09 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:18.347 02:47:09 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:18.347 02:47:09 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:18.347 02:47:09 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:18.347 1+0 records in 00:06:18.347 1+0 records out 00:06:18.347 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000225363 s, 18.2 MB/s 00:06:18.347 02:47:09 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:18.606 02:47:09 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:18.606 02:47:09 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:18.606 02:47:09 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:18.606 02:47:09 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:18.606 02:47:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:18.606 02:47:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:18.606 02:47:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:18.606 /dev/nbd1 00:06:18.606 02:47:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:18.606 02:47:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:18.606 02:47:09 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:18.606 02:47:09 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:18.606 02:47:09 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:18.606 02:47:09 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:18.606 02:47:09 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:18.606 02:47:09 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:18.606 02:47:09 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:18.606 02:47:09 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:18.606 02:47:09 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:18.606 1+0 records in 00:06:18.606 1+0 records out 00:06:18.606 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000278729 s, 14.7 MB/s 00:06:18.606 02:47:09 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:18.606 02:47:09 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:18.606 02:47:09 
event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:18.606 02:47:09 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:18.606 02:47:09 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:18.606 02:47:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:18.606 02:47:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:18.606 02:47:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:18.606 02:47:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.606 02:47:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:18.865 02:47:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:18.866 { 00:06:18.866 "nbd_device": "/dev/nbd0", 00:06:18.866 "bdev_name": "Malloc0" 00:06:18.866 }, 00:06:18.866 { 00:06:18.866 "nbd_device": "/dev/nbd1", 00:06:18.866 "bdev_name": "Malloc1" 00:06:18.866 } 00:06:18.866 ]' 00:06:18.866 02:47:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:18.866 { 00:06:18.866 "nbd_device": "/dev/nbd0", 00:06:18.866 "bdev_name": "Malloc0" 00:06:18.866 }, 00:06:18.866 { 00:06:18.866 "nbd_device": "/dev/nbd1", 00:06:18.866 "bdev_name": "Malloc1" 00:06:18.866 } 00:06:18.866 ]' 00:06:18.866 02:47:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:18.866 02:47:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:18.866 /dev/nbd1' 00:06:18.866 02:47:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:18.866 /dev/nbd1' 00:06:18.866 02:47:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:18.866 02:47:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:18.866 02:47:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:18.866 02:47:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:18.866 02:47:09 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:18.866 02:47:09 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:18.866 02:47:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.866 02:47:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:18.866 02:47:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:18.866 02:47:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:18.866 02:47:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:18.866 02:47:09 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:18.866 256+0 records in 00:06:18.866 256+0 records out 00:06:18.866 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0110742 s, 94.7 MB/s 00:06:18.866 02:47:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:18.866 02:47:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:18.866 256+0 records in 00:06:18.866 256+0 records out 00:06:18.866 1048576 
bytes (1.0 MB, 1.0 MiB) copied, 0.020079 s, 52.2 MB/s 00:06:18.866 02:47:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:18.866 02:47:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:18.866 256+0 records in 00:06:18.866 256+0 records out 00:06:18.866 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0220925 s, 47.5 MB/s 00:06:18.866 02:47:09 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:18.866 02:47:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.866 02:47:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:18.866 02:47:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:18.866 02:47:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:18.866 02:47:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:18.866 02:47:09 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:18.866 02:47:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:18.866 02:47:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:18.866 02:47:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:19.125 02:47:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:19.125 02:47:09 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:19.125 02:47:09 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:19.125 02:47:09 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.125 02:47:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.125 02:47:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:19.125 02:47:09 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:19.125 02:47:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:19.125 02:47:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:19.125 02:47:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:19.125 02:47:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:19.125 02:47:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:19.125 02:47:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:19.125 02:47:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:19.125 02:47:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:19.125 02:47:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:19.125 02:47:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:19.125 02:47:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:19.125 02:47:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:19.384 02:47:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:19.384 02:47:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:19.384 02:47:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:19.384 02:47:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:19.384 02:47:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:19.384 02:47:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:19.384 02:47:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:19.384 02:47:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:19.384 02:47:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:19.384 02:47:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.384 02:47:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:19.643 02:47:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:19.643 02:47:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:19.643 02:47:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:19.643 02:47:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:19.643 02:47:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:19.643 02:47:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:19.643 02:47:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:19.643 02:47:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:19.643 02:47:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:19.643 02:47:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:19.643 02:47:10 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:19.643 02:47:10 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:19.643 02:47:10 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:19.903 02:47:10 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:19.903 [2024-05-13 02:47:10.656336] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:19.903 [2024-05-13 02:47:10.691715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.903 [2024-05-13 02:47:10.691718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.162 [2024-05-13 02:47:10.733518] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:20.162 [2024-05-13 02:47:10.733560] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
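Each nbd_start_disk call above is followed by the waitfornbd readiness check: the helper polls /proc/partitions until the new nbd device appears (up to 20 attempts), then reads a single 4 KiB block with iflag=direct and accepts the device once that read produces a non-empty file. A hedged sketch of that loop, inferred from the autotest_common.sh lines in the trace; the function name, temp path, and retry delay below are assumptions, not the exact upstream code:

    # Sketch of a waitfornbd-style readiness check (name, temp path and sleep interval are assumed).
    wait_for_nbd() {
        local nbd_name=$1 i size
        # poll until the kernel exposes the device in /proc/partitions (up to 20 tries)
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        ((i <= 20)) || return 1
        # smoke-test: one direct-I/O read must transfer a non-empty block
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [[ $size -ne 0 ]]
    }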
00:06:22.768 02:47:13 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:22.768 02:47:13 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:22.768 spdk_app_start Round 2 00:06:22.768 02:47:13 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3483949 /var/tmp/spdk-nbd.sock 00:06:22.768 02:47:13 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3483949 ']' 00:06:22.768 02:47:13 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:22.768 02:47:13 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:22.768 02:47:13 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:22.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:22.768 02:47:13 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:22.768 02:47:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:23.040 02:47:13 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:23.040 02:47:13 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:23.040 02:47:13 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:23.040 Malloc0 00:06:23.040 02:47:13 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:23.302 Malloc1 00:06:23.302 02:47:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:23.302 02:47:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.302 02:47:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:23.302 02:47:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:23.302 02:47:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.302 02:47:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:23.302 02:47:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:23.302 02:47:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.302 02:47:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:23.302 02:47:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:23.302 02:47:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.302 02:47:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:23.302 02:47:14 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:23.302 02:47:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:23.302 02:47:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:23.302 02:47:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:23.560 /dev/nbd0 00:06:23.560 02:47:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:23.560 02:47:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:06:23.560 02:47:14 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:23.560 02:47:14 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:23.560 02:47:14 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:23.560 02:47:14 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:23.560 02:47:14 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:23.560 02:47:14 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:23.560 02:47:14 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:23.560 02:47:14 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:23.560 02:47:14 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:23.560 1+0 records in 00:06:23.560 1+0 records out 00:06:23.560 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000238183 s, 17.2 MB/s 00:06:23.560 02:47:14 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:23.560 02:47:14 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:23.560 02:47:14 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:23.560 02:47:14 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:23.560 02:47:14 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:23.560 02:47:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:23.560 02:47:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:23.560 02:47:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:23.819 /dev/nbd1 00:06:23.819 02:47:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:23.819 02:47:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:23.819 02:47:14 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:23.819 02:47:14 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:23.819 02:47:14 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:23.819 02:47:14 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:23.819 02:47:14 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:23.819 02:47:14 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:23.819 02:47:14 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:23.819 02:47:14 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:23.819 02:47:14 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:23.819 1+0 records in 00:06:23.819 1+0 records out 00:06:23.819 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000173742 s, 23.6 MB/s 00:06:23.819 02:47:14 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:23.819 02:47:14 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:23.819 02:47:14 
event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:23.819 02:47:14 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:23.819 02:47:14 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:23.819 02:47:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:23.819 02:47:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:23.819 02:47:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:23.820 02:47:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.820 02:47:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:23.820 02:47:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:23.820 { 00:06:23.820 "nbd_device": "/dev/nbd0", 00:06:23.820 "bdev_name": "Malloc0" 00:06:23.820 }, 00:06:23.820 { 00:06:23.820 "nbd_device": "/dev/nbd1", 00:06:23.820 "bdev_name": "Malloc1" 00:06:23.820 } 00:06:23.820 ]' 00:06:23.820 02:47:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:23.820 { 00:06:23.820 "nbd_device": "/dev/nbd0", 00:06:23.820 "bdev_name": "Malloc0" 00:06:23.820 }, 00:06:23.820 { 00:06:23.820 "nbd_device": "/dev/nbd1", 00:06:23.820 "bdev_name": "Malloc1" 00:06:23.820 } 00:06:23.820 ]' 00:06:23.820 02:47:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:24.079 02:47:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:24.079 /dev/nbd1' 00:06:24.079 02:47:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:24.079 /dev/nbd1' 00:06:24.079 02:47:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:24.079 02:47:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:24.079 02:47:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:24.079 02:47:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:24.079 02:47:14 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:24.079 02:47:14 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:24.079 02:47:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.079 02:47:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:24.079 02:47:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:24.079 02:47:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:24.079 02:47:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:24.079 02:47:14 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:24.079 256+0 records in 00:06:24.079 256+0 records out 00:06:24.079 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00408205 s, 257 MB/s 00:06:24.079 02:47:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:24.079 02:47:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:24.079 256+0 records in 00:06:24.079 256+0 records out 00:06:24.079 1048576 
bytes (1.0 MB, 1.0 MiB) copied, 0.0202304 s, 51.8 MB/s 00:06:24.079 02:47:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:24.079 02:47:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:24.079 256+0 records in 00:06:24.079 256+0 records out 00:06:24.079 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.021084 s, 49.7 MB/s 00:06:24.079 02:47:14 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:24.079 02:47:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.079 02:47:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:24.079 02:47:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:24.079 02:47:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:24.079 02:47:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:24.079 02:47:14 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:24.079 02:47:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:24.079 02:47:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:24.079 02:47:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:24.079 02:47:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:24.079 02:47:14 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:24.079 02:47:14 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:24.079 02:47:14 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.079 02:47:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.079 02:47:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:24.079 02:47:14 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:24.079 02:47:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.079 02:47:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:24.337 02:47:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:24.338 02:47:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:24.338 02:47:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:24.338 02:47:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:24.338 02:47:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:24.338 02:47:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:24.338 02:47:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:24.338 02:47:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:24.338 02:47:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.338 02:47:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:24.338 02:47:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:24.338 02:47:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:24.338 02:47:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:24.338 02:47:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:24.338 02:47:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:24.338 02:47:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:24.596 02:47:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:24.596 02:47:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:24.596 02:47:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:24.596 02:47:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.596 02:47:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:24.596 02:47:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:24.596 02:47:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:24.596 02:47:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:24.596 02:47:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:24.596 02:47:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:24.596 02:47:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:24.596 02:47:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:24.596 02:47:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:24.596 02:47:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:24.596 02:47:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:24.596 02:47:15 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:24.596 02:47:15 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:24.596 02:47:15 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:24.855 02:47:15 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:25.114 [2024-05-13 02:47:15.721652] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:25.114 [2024-05-13 02:47:15.758221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:25.114 [2024-05-13 02:47:15.758225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.114 [2024-05-13 02:47:15.799129] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:25.114 [2024-05-13 02:47:15.799167] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
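The teardown in each round repeats the nbd_get_count pattern shown above: nbd_get_disks returns a JSON array of nbd_device/bdev_name pairs, jq pulls out the device paths, and grep -c /dev/nbd yields the count (2 while both exports are attached, 0 after both nbd_stop_disk calls). A small sketch of that accounting, reusing the rpc.py invocation and socket path from the log; the helper name and error message are illustrative only:

    rpc=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock

    nbd_count() {
        # list the attached NBD exports and count how many /dev/nbd* entries come back;
        # grep -c exits non-zero on zero matches, so "|| true" keeps the count usable under set -e
        "$rpc" -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true
    }

    # detach both exports, then confirm nothing is left behind
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd1
    [[ $(nbd_count) -eq 0 ]] || echo "stale NBD exports detected" >&2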
00:06:28.398 02:47:18 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3483949 /var/tmp/spdk-nbd.sock 00:06:28.398 02:47:18 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3483949 ']' 00:06:28.398 02:47:18 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:28.398 02:47:18 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:28.398 02:47:18 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:28.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:28.398 02:47:18 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:28.398 02:47:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:28.398 02:47:18 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:28.398 02:47:18 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:28.398 02:47:18 event.app_repeat -- event/event.sh@39 -- # killprocess 3483949 00:06:28.398 02:47:18 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 3483949 ']' 00:06:28.398 02:47:18 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 3483949 00:06:28.398 02:47:18 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:06:28.398 02:47:18 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:28.398 02:47:18 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3483949 00:06:28.398 02:47:18 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:28.398 02:47:18 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:28.398 02:47:18 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3483949' 00:06:28.398 killing process with pid 3483949 00:06:28.398 02:47:18 event.app_repeat -- common/autotest_common.sh@965 -- # kill 3483949 00:06:28.398 02:47:18 event.app_repeat -- common/autotest_common.sh@970 -- # wait 3483949 00:06:28.398 spdk_app_start is called in Round 0. 00:06:28.398 Shutdown signal received, stop current app iteration 00:06:28.398 Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 reinitialization... 00:06:28.398 spdk_app_start is called in Round 1. 00:06:28.398 Shutdown signal received, stop current app iteration 00:06:28.398 Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 reinitialization... 00:06:28.398 spdk_app_start is called in Round 2. 00:06:28.398 Shutdown signal received, stop current app iteration 00:06:28.398 Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 reinitialization... 00:06:28.398 spdk_app_start is called in Round 3. 
00:06:28.398 Shutdown signal received, stop current app iteration 00:06:28.398 02:47:18 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:28.398 02:47:18 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:28.398 00:06:28.398 real 0m15.644s 00:06:28.398 user 0m33.204s 00:06:28.398 sys 0m3.131s 00:06:28.398 02:47:18 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:28.398 02:47:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:28.398 ************************************ 00:06:28.398 END TEST app_repeat 00:06:28.398 ************************************ 00:06:28.398 02:47:18 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:28.398 02:47:18 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:28.398 02:47:18 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:28.398 02:47:18 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:28.398 02:47:18 event -- common/autotest_common.sh@10 -- # set +x 00:06:28.398 ************************************ 00:06:28.398 START TEST cpu_locks 00:06:28.398 ************************************ 00:06:28.398 02:47:19 event.cpu_locks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:28.398 * Looking for test storage... 00:06:28.398 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:06:28.398 02:47:19 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:28.398 02:47:19 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:28.398 02:47:19 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:28.398 02:47:19 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:28.398 02:47:19 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:28.398 02:47:19 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:28.398 02:47:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:28.398 ************************************ 00:06:28.398 START TEST default_locks 00:06:28.398 ************************************ 00:06:28.398 02:47:19 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:06:28.398 02:47:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3486853 00:06:28.398 02:47:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3486853 00:06:28.398 02:47:19 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 3486853 ']' 00:06:28.398 02:47:19 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.398 02:47:19 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:28.398 02:47:19 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:28.398 02:47:19 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:28.398 02:47:19 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:28.398 02:47:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:28.398 [2024-05-13 02:47:19.170894] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:06:28.398 [2024-05-13 02:47:19.170954] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3486853 ] 00:06:28.657 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.657 [2024-05-13 02:47:19.208389] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:28.657 [2024-05-13 02:47:19.240407] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.657 [2024-05-13 02:47:19.280676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.657 02:47:19 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:28.657 02:47:19 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:06:28.657 02:47:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3486853 00:06:28.657 02:47:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3486853 00:06:28.916 02:47:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:28.916 lslocks: write error 00:06:28.916 02:47:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3486853 00:06:28.916 02:47:19 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 3486853 ']' 00:06:28.916 02:47:19 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 3486853 00:06:28.916 02:47:19 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:06:28.916 02:47:19 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:28.916 02:47:19 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3486853 00:06:29.174 02:47:19 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:29.174 02:47:19 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:29.174 02:47:19 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3486853' 00:06:29.174 killing process with pid 3486853 00:06:29.174 02:47:19 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 3486853 00:06:29.174 02:47:19 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 3486853 00:06:29.434 02:47:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3486853 00:06:29.434 02:47:20 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:29.434 02:47:20 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3486853 00:06:29.434 02:47:20 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:29.434 02:47:20 event.cpu_locks.default_locks -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:29.434 02:47:20 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:29.434 02:47:20 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:29.434 02:47:20 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 3486853 00:06:29.434 02:47:20 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 3486853 ']' 00:06:29.434 02:47:20 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.434 02:47:20 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:29.434 02:47:20 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.434 02:47:20 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:29.434 02:47:20 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:29.434 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (3486853) - No such process 00:06:29.434 ERROR: process (pid: 3486853) is no longer running 00:06:29.434 02:47:20 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:29.434 02:47:20 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:06:29.434 02:47:20 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:29.434 02:47:20 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:29.434 02:47:20 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:29.434 02:47:20 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:29.434 02:47:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:29.434 02:47:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:29.434 02:47:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:29.434 02:47:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:29.434 00:06:29.434 real 0m0.873s 00:06:29.434 user 0m0.809s 00:06:29.434 sys 0m0.454s 00:06:29.434 02:47:20 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:29.434 02:47:20 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:29.434 ************************************ 00:06:29.434 END TEST default_locks 00:06:29.434 ************************************ 00:06:29.434 02:47:20 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:29.434 02:47:20 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:29.434 02:47:20 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:29.434 02:47:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:29.434 ************************************ 00:06:29.434 START TEST default_locks_via_rpc 00:06:29.434 ************************************ 00:06:29.434 02:47:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:06:29.434 02:47:20 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3487139 00:06:29.434 02:47:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3487139 00:06:29.434 02:47:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:29.434 02:47:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3487139 ']' 00:06:29.434 02:47:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.434 02:47:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:29.434 02:47:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.434 02:47:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:29.434 02:47:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.434 [2024-05-13 02:47:20.131217] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:06:29.434 [2024-05-13 02:47:20.131293] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3487139 ] 00:06:29.434 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.434 [2024-05-13 02:47:20.167625] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:29.434 [2024-05-13 02:47:20.199266] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.693 [2024-05-13 02:47:20.237829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.693 02:47:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:29.693 02:47:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:29.693 02:47:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:29.693 02:47:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.693 02:47:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.693 02:47:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.693 02:47:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:29.693 02:47:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:29.693 02:47:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:29.693 02:47:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:29.693 02:47:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:29.693 02:47:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.693 02:47:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.693 02:47:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.693 02:47:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3487139 00:06:29.693 02:47:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3487139 00:06:29.693 02:47:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:29.953 02:47:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3487139 00:06:29.953 02:47:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 3487139 ']' 00:06:29.953 02:47:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 3487139 00:06:29.953 02:47:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:06:29.953 02:47:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:29.953 02:47:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3487139 00:06:29.953 02:47:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:29.953 02:47:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:29.953 02:47:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3487139' 00:06:29.953 killing process with pid 3487139 00:06:29.953 02:47:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 3487139 00:06:29.953 02:47:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 3487139 00:06:30.522 00:06:30.522 real 0m0.928s 00:06:30.522 user 0m0.861s 00:06:30.522 sys 0m0.467s 00:06:30.522 02:47:21 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:30.522 02:47:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.522 ************************************ 00:06:30.522 END TEST default_locks_via_rpc 00:06:30.522 ************************************ 00:06:30.522 02:47:21 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:30.522 02:47:21 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:30.522 02:47:21 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:30.522 02:47:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:30.522 ************************************ 00:06:30.522 START TEST non_locking_app_on_locked_coremask 00:06:30.522 ************************************ 00:06:30.522 02:47:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:06:30.522 02:47:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3487387 00:06:30.522 02:47:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3487387 /var/tmp/spdk.sock 00:06:30.522 02:47:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:30.522 02:47:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3487387 ']' 00:06:30.522 02:47:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.522 02:47:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:30.522 02:47:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.522 02:47:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:30.522 02:47:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:30.522 [2024-05-13 02:47:21.151152] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:06:30.522 [2024-05-13 02:47:21.151233] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3487387 ] 00:06:30.522 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.522 [2024-05-13 02:47:21.188284] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
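The default_locks_via_rpc case that finishes above toggles core locking at runtime rather than with command-line flags: framework_disable_cpumask_locks releases the per-core locks, framework_enable_cpumask_locks claims them again, and lslocks confirms the advisory lock held by the running target. A condensed sketch of that sequence; rpc_cmd in the trace is the harness wrapper, so the explicit scripts/rpc.py invocation below is an assumption:

  rpc="/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

  $rpc framework_disable_cpumask_locks                   # release the core locks (no_locks passes)
  $rpc framework_enable_cpumask_locks                    # claim them again
  lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock     # locks_exist: the advisory lock is back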
00:06:30.522 [2024-05-13 02:47:21.219950] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.522 [2024-05-13 02:47:21.259845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.781 02:47:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:30.781 02:47:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:30.781 02:47:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3487438 00:06:30.781 02:47:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3487438 /var/tmp/spdk2.sock 00:06:30.781 02:47:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:30.781 02:47:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3487438 ']' 00:06:30.781 02:47:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:30.781 02:47:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:30.781 02:47:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:30.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:30.782 02:47:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:30.782 02:47:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:30.782 [2024-05-13 02:47:21.463240] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:06:30.782 [2024-05-13 02:47:21.463327] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3487438 ] 00:06:30.782 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.782 [2024-05-13 02:47:21.503834] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:30.782 [2024-05-13 02:47:21.554623] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
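non_locking_app_on_locked_coremask, starting above, runs two targets on the same core: the first claims the core 0 lock, while the second is started with --disable-cpumask-locks and its own RPC socket so it never attempts the claim, and both are expected to come up. A sketch with the flags from the trace:

  SPDK_BIN=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt

  $SPDK_BIN -m 0x1 &                                                   # first target holds the core 0 lock
  $SPDK_BIN -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &    # second target skips the lock
  # both instances reach waitforlisten; only the first shows a spdk_cpu_lock entry in lslocks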
00:06:30.782 [2024-05-13 02:47:21.554643] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.041 [2024-05-13 02:47:21.629010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.606 02:47:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:31.606 02:47:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:31.606 02:47:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3487387 00:06:31.606 02:47:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3487387 00:06:31.606 02:47:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:32.543 lslocks: write error 00:06:32.543 02:47:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3487387 00:06:32.543 02:47:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3487387 ']' 00:06:32.543 02:47:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 3487387 00:06:32.543 02:47:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:32.543 02:47:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:32.543 02:47:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3487387 00:06:32.543 02:47:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:32.543 02:47:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:32.543 02:47:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3487387' 00:06:32.543 killing process with pid 3487387 00:06:32.543 02:47:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 3487387 00:06:32.543 02:47:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 3487387 00:06:33.112 02:47:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3487438 00:06:33.112 02:47:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3487438 ']' 00:06:33.112 02:47:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 3487438 00:06:33.112 02:47:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:33.112 02:47:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:33.112 02:47:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3487438 00:06:33.112 02:47:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:33.112 02:47:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:33.112 02:47:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3487438' 00:06:33.112 
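Teardown in each case goes through the killprocess helper traced above: it verifies the pid is still alive and still belongs to an SPDK reactor (comm reports reactor_0) rather than a sudo wrapper, then kills it and waits for the exit. A condensed equivalent of that trace:

  killprocess() {
      local pid=$1
      kill -0 "$pid" || return 0                      # already gone
      [ "$(uname)" = Linux ] || return 0
      name=$(ps --no-headers -o comm= "$pid")         # reactor_0 for an SPDK target
      if [ "$name" != sudo ]; then                    # the real helper has a separate sudo branch
          echo "killing process with pid $pid"
          kill "$pid" && wait "$pid"
      fi
  }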
killing process with pid 3487438 00:06:33.112 02:47:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 3487438 00:06:33.112 02:47:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 3487438 00:06:33.372 00:06:33.372 real 0m2.989s 00:06:33.372 user 0m3.040s 00:06:33.372 sys 0m1.169s 00:06:33.372 02:47:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:33.372 02:47:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:33.372 ************************************ 00:06:33.372 END TEST non_locking_app_on_locked_coremask 00:06:33.372 ************************************ 00:06:33.372 02:47:24 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:33.372 02:47:24 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:33.372 02:47:24 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:33.372 02:47:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:33.632 ************************************ 00:06:33.632 START TEST locking_app_on_unlocked_coremask 00:06:33.632 ************************************ 00:06:33.632 02:47:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:06:33.632 02:47:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3487948 00:06:33.632 02:47:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3487948 /var/tmp/spdk.sock 00:06:33.632 02:47:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:33.632 02:47:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3487948 ']' 00:06:33.632 02:47:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.632 02:47:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:33.632 02:47:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.632 02:47:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:33.632 02:47:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:33.632 [2024-05-13 02:47:24.231779] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:06:33.632 [2024-05-13 02:47:24.231865] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3487948 ] 00:06:33.632 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.632 [2024-05-13 02:47:24.268754] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
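locking_app_on_unlocked_coremask, starting above, is the mirror image of the previous case: the first target is launched with --disable-cpumask-locks so core 0 stays unclaimed, and a second, normally locking target on the same core is then expected to take the lock without conflict. A sketch:

  SPDK_BIN=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt

  $SPDK_BIN -m 0x1 --disable-cpumask-locks &          # leaves core 0 unclaimed
  $SPDK_BIN -m 0x1 -r /var/tmp/spdk2.sock &           # claims core 0 normally
  second_pid=$!
  lslocks -p "$second_pid" | grep -q spdk_cpu_lock    # the lock belongs to the second target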
00:06:33.632 [2024-05-13 02:47:24.302455] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:33.632 [2024-05-13 02:47:24.302478] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.632 [2024-05-13 02:47:24.343464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.892 02:47:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:33.892 02:47:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:33.892 02:47:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3488011 00:06:33.892 02:47:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3488011 /var/tmp/spdk2.sock 00:06:33.892 02:47:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:33.892 02:47:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3488011 ']' 00:06:33.892 02:47:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:33.892 02:47:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:33.892 02:47:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:33.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:33.892 02:47:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:33.892 02:47:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:33.892 [2024-05-13 02:47:24.545585] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:06:33.892 [2024-05-13 02:47:24.545674] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3488011 ] 00:06:33.892 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.892 [2024-05-13 02:47:24.582854] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:33.892 [2024-05-13 02:47:24.634161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.151 [2024-05-13 02:47:24.712523] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.720 02:47:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:34.720 02:47:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:34.720 02:47:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3488011 00:06:34.720 02:47:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3488011 00:06:34.720 02:47:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:35.288 lslocks: write error 00:06:35.288 02:47:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3487948 00:06:35.288 02:47:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3487948 ']' 00:06:35.288 02:47:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 3487948 00:06:35.288 02:47:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:35.288 02:47:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:35.288 02:47:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3487948 00:06:35.288 02:47:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:35.288 02:47:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:35.288 02:47:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3487948' 00:06:35.288 killing process with pid 3487948 00:06:35.288 02:47:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 3487948 00:06:35.288 02:47:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 3487948 00:06:35.857 02:47:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3488011 00:06:35.857 02:47:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3488011 ']' 00:06:35.857 02:47:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 3488011 00:06:35.857 02:47:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:35.857 02:47:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:35.857 02:47:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3488011 00:06:35.857 02:47:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:35.857 02:47:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:35.857 02:47:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3488011' 00:06:35.857 killing process with pid 3488011 00:06:35.857 
02:47:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 3488011 00:06:35.857 02:47:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 3488011 00:06:36.116 00:06:36.116 real 0m2.573s 00:06:36.116 user 0m2.642s 00:06:36.116 sys 0m0.952s 00:06:36.116 02:47:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:36.116 02:47:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:36.116 ************************************ 00:06:36.116 END TEST locking_app_on_unlocked_coremask 00:06:36.116 ************************************ 00:06:36.116 02:47:26 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:36.116 02:47:26 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:36.116 02:47:26 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:36.116 02:47:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:36.116 ************************************ 00:06:36.116 START TEST locking_app_on_locked_coremask 00:06:36.116 ************************************ 00:06:36.116 02:47:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:06:36.116 02:47:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3488327 00:06:36.116 02:47:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3488327 /var/tmp/spdk.sock 00:06:36.116 02:47:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:36.116 02:47:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3488327 ']' 00:06:36.116 02:47:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.116 02:47:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:36.116 02:47:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.116 02:47:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:36.116 02:47:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:36.116 [2024-05-13 02:47:26.892708] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:06:36.116 [2024-05-13 02:47:26.892788] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3488327 ] 00:06:36.375 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.375 [2024-05-13 02:47:26.930558] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:36.375 [2024-05-13 02:47:26.962157] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.375 [2024-05-13 02:47:27.001522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.635 02:47:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:36.635 02:47:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:36.635 02:47:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3488463 00:06:36.635 02:47:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3488463 /var/tmp/spdk2.sock 00:06:36.635 02:47:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:36.635 02:47:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:36.635 02:47:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3488463 /var/tmp/spdk2.sock 00:06:36.635 02:47:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:36.635 02:47:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:36.635 02:47:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:36.635 02:47:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:36.635 02:47:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3488463 /var/tmp/spdk2.sock 00:06:36.635 02:47:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3488463 ']' 00:06:36.635 02:47:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:36.635 02:47:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:36.635 02:47:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:36.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:36.635 02:47:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:36.635 02:47:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:36.635 [2024-05-13 02:47:27.206138] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:06:36.635 [2024-05-13 02:47:27.206226] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3488463 ] 00:06:36.635 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.635 [2024-05-13 02:47:27.246834] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
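locking_app_on_locked_coremask is the negative case: a second target without --disable-cpumask-locks is pointed at the core already locked by the first one, and the harness wraps waitforlisten in NOT, so the pass condition is that the new target exits instead of coming up (the claim error follows in the trace below). A sketch of the expectation:

  /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
  # expected: "Cannot create lock on core 0, probably process <pid> has claimed it." followed by
  # "Unable to acquire lock on assigned core mask - exiting." and a non-zero exit status,
  # which the NOT wrapper turns into a pass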
00:06:36.635 [2024-05-13 02:47:27.301419] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3488327 has claimed it. 00:06:36.635 [2024-05-13 02:47:27.301449] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:37.203 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (3488463) - No such process 00:06:37.203 ERROR: process (pid: 3488463) is no longer running 00:06:37.203 02:47:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:37.203 02:47:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:06:37.203 02:47:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:37.203 02:47:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:37.203 02:47:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:37.203 02:47:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:37.203 02:47:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3488327 00:06:37.203 02:47:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3488327 00:06:37.203 02:47:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:37.462 lslocks: write error 00:06:37.462 02:47:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3488327 00:06:37.462 02:47:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3488327 ']' 00:06:37.462 02:47:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 3488327 00:06:37.462 02:47:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:37.462 02:47:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:37.462 02:47:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3488327 00:06:37.723 02:47:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:37.723 02:47:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:37.723 02:47:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3488327' 00:06:37.723 killing process with pid 3488327 00:06:37.723 02:47:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 3488327 00:06:37.723 02:47:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 3488327 00:06:37.993 00:06:37.993 real 0m1.702s 00:06:37.993 user 0m1.779s 00:06:37.993 sys 0m0.650s 00:06:37.993 02:47:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:37.993 02:47:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:37.993 ************************************ 00:06:37.993 END TEST locking_app_on_locked_coremask 00:06:37.993 ************************************ 00:06:37.993 02:47:28 
event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:37.993 02:47:28 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:37.993 02:47:28 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:37.993 02:47:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:37.993 ************************************ 00:06:37.993 START TEST locking_overlapped_coremask 00:06:37.993 ************************************ 00:06:37.993 02:47:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:06:37.993 02:47:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3488735 00:06:37.994 02:47:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3488735 /var/tmp/spdk.sock 00:06:37.994 02:47:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:37.994 02:47:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 3488735 ']' 00:06:37.994 02:47:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.994 02:47:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:37.994 02:47:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.994 02:47:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:37.994 02:47:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:37.994 [2024-05-13 02:47:28.684190] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:06:37.994 [2024-05-13 02:47:28.684253] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3488735 ] 00:06:37.994 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.994 [2024-05-13 02:47:28.720924] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
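locking_overlapped_coremask, starting above, widens the scenario from a single core to overlapping masks: the first target takes cores 0-2 (-m 0x7), the second asks for cores 2-4 (-m 0x1c), and the one shared core, core 2, is already locked, so the second target must fail while the first keeps all three of its lock files. A sketch of the setup and the final check:

  SPDK_BIN=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt

  $SPDK_BIN -m 0x7 &                                # locks cores 0, 1 and 2
  $SPDK_BIN -m 0x1c -r /var/tmp/spdk2.sock          # wants cores 2, 3, 4 -> fails on core 2
  ls /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002
  # check_remaining_locks: exactly the first target's three lock files survive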
00:06:37.994 [2024-05-13 02:47:28.752531] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:37.994 [2024-05-13 02:47:28.795067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.994 [2024-05-13 02:47:28.795160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.994 [2024-05-13 02:47:28.795160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:38.252 02:47:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:38.252 02:47:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:38.252 02:47:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3488878 00:06:38.252 02:47:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3488878 /var/tmp/spdk2.sock 00:06:38.252 02:47:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:38.252 02:47:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:38.252 02:47:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3488878 /var/tmp/spdk2.sock 00:06:38.252 02:47:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:38.252 02:47:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:38.252 02:47:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:38.252 02:47:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:38.252 02:47:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3488878 /var/tmp/spdk2.sock 00:06:38.252 02:47:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 3488878 ']' 00:06:38.252 02:47:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:38.252 02:47:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:38.252 02:47:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:38.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:38.252 02:47:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:38.252 02:47:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:38.252 [2024-05-13 02:47:29.007403] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:06:38.252 [2024-05-13 02:47:29.007466] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3488878 ] 00:06:38.252 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.252 [2024-05-13 02:47:29.046144] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:06:38.512 [2024-05-13 02:47:29.099623] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3488735 has claimed it. 00:06:38.512 [2024-05-13 02:47:29.099654] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:39.081 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (3488878) - No such process 00:06:39.081 ERROR: process (pid: 3488878) is no longer running 00:06:39.081 02:47:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:39.081 02:47:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:06:39.081 02:47:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:39.081 02:47:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:39.081 02:47:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:39.081 02:47:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:39.081 02:47:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:39.081 02:47:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:39.081 02:47:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:39.081 02:47:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:39.081 02:47:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3488735 00:06:39.081 02:47:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 3488735 ']' 00:06:39.081 02:47:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 3488735 00:06:39.081 02:47:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:06:39.081 02:47:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:39.081 02:47:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3488735 00:06:39.081 02:47:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:39.081 02:47:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:39.081 02:47:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3488735' 00:06:39.081 killing process with pid 3488735 00:06:39.081 02:47:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 3488735 00:06:39.081 02:47:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # wait 3488735 00:06:39.341 00:06:39.341 real 0m1.348s 00:06:39.341 user 0m3.657s 00:06:39.341 sys 0m0.407s 00:06:39.341 02:47:30 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:39.341 02:47:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:39.341 ************************************ 00:06:39.341 END TEST locking_overlapped_coremask 00:06:39.341 ************************************ 00:06:39.341 02:47:30 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:39.341 02:47:30 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:39.341 02:47:30 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:39.341 02:47:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:39.341 ************************************ 00:06:39.341 START TEST locking_overlapped_coremask_via_rpc 00:06:39.341 ************************************ 00:06:39.341 02:47:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:06:39.341 02:47:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3488969 00:06:39.341 02:47:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3488969 /var/tmp/spdk.sock 00:06:39.341 02:47:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:39.341 02:47:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3488969 ']' 00:06:39.341 02:47:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.341 02:47:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:39.341 02:47:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.341 02:47:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:39.341 02:47:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.341 [2024-05-13 02:47:30.124577] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:06:39.341 [2024-05-13 02:47:30.124660] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3488969 ] 00:06:39.601 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.601 [2024-05-13 02:47:30.163982] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:39.601 [2024-05-13 02:47:30.197209] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
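locking_overlapped_coremask_via_rpc repeats the overlap scenario, but both targets boot with --disable-cpumask-locks so neither claims anything at startup; the locks are then requested over JSON-RPC, where the first claim succeeds and the second collides on core 2 (the error response appears below). A sketch; the scripts/rpc.py invocations stand in for the harness's rpc_cmd and are an assumption:

  SPDK_BIN=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt
  RPC=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py

  $SPDK_BIN -m 0x7  --disable-cpumask-locks &                           # cores 0-2, no locks yet
  $SPDK_BIN -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &    # cores 2-4, no locks yet

  $RPC framework_enable_cpumask_locks                          # first target claims cores 0-2
  $RPC -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # fails: core 2 already claimed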
00:06:39.601 [2024-05-13 02:47:30.197232] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:39.601 [2024-05-13 02:47:30.238539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.601 [2024-05-13 02:47:30.238621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:39.601 [2024-05-13 02:47:30.238623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.861 02:47:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:39.861 02:47:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:39.861 02:47:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3489136 00:06:39.861 02:47:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3489136 /var/tmp/spdk2.sock 00:06:39.861 02:47:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:39.861 02:47:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3489136 ']' 00:06:39.861 02:47:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:39.861 02:47:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:39.861 02:47:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:39.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:39.861 02:47:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:39.861 02:47:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.861 [2024-05-13 02:47:30.459194] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:06:39.861 [2024-05-13 02:47:30.459278] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3489136 ] 00:06:39.861 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.861 [2024-05-13 02:47:30.499099] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:39.861 [2024-05-13 02:47:30.555232] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:39.861 [2024-05-13 02:47:30.555254] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:39.861 [2024-05-13 02:47:30.635599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:39.861 [2024-05-13 02:47:30.639431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:39.861 [2024-05-13 02:47:30.639431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:40.797 02:47:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:40.797 02:47:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:40.797 02:47:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:40.797 02:47:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.797 02:47:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.797 02:47:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.797 02:47:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:40.797 02:47:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:40.797 02:47:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:40.797 02:47:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:40.797 02:47:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:40.797 02:47:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:40.797 02:47:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:40.797 02:47:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:40.797 02:47:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.797 02:47:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.797 [2024-05-13 02:47:31.312442] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3488969 has claimed it. 
00:06:40.797 request: 00:06:40.797 { 00:06:40.797 "method": "framework_enable_cpumask_locks", 00:06:40.797 "req_id": 1 00:06:40.797 } 00:06:40.797 Got JSON-RPC error response 00:06:40.797 response: 00:06:40.797 { 00:06:40.797 "code": -32603, 00:06:40.797 "message": "Failed to claim CPU core: 2" 00:06:40.797 } 00:06:40.797 02:47:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:40.797 02:47:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:40.797 02:47:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:40.797 02:47:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:40.797 02:47:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:40.797 02:47:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3488969 /var/tmp/spdk.sock 00:06:40.797 02:47:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3488969 ']' 00:06:40.797 02:47:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.797 02:47:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:40.797 02:47:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.797 02:47:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:40.797 02:47:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.797 02:47:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:40.797 02:47:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:40.797 02:47:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3489136 /var/tmp/spdk2.sock 00:06:40.797 02:47:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3489136 ']' 00:06:40.797 02:47:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:40.797 02:47:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:40.797 02:47:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:40.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
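The request/response pair above is the raw JSON-RPC exchange: framework_enable_cpumask_locks takes no parameters, and the collision comes back as a -32603 (internal error) response whose message names the contested core. The harness expects exactly this failure, which is what its NOT wrapper encodes; a condensed sketch of that pattern, simplified from the xtrace visible here:

  NOT() {
      local es=0
      "$@" || es=$?
      (( es > 128 )) && return "$es"   # signal-style exits are real failures: propagate them
      (( !es == 0 ))                   # otherwise succeed only if the wrapped command failed
  }

  NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks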
00:06:40.797 02:47:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:40.797 02:47:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.056 02:47:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:41.056 02:47:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:41.056 02:47:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:41.056 02:47:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:41.056 02:47:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:41.056 02:47:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:41.056 00:06:41.056 real 0m1.591s 00:06:41.056 user 0m0.701s 00:06:41.056 sys 0m0.167s 00:06:41.056 02:47:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:41.056 02:47:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.056 ************************************ 00:06:41.056 END TEST locking_overlapped_coremask_via_rpc 00:06:41.056 ************************************ 00:06:41.056 02:47:31 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:41.056 02:47:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3488969 ]] 00:06:41.056 02:47:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3488969 00:06:41.056 02:47:31 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3488969 ']' 00:06:41.056 02:47:31 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3488969 00:06:41.056 02:47:31 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:06:41.056 02:47:31 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:41.056 02:47:31 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3488969 00:06:41.056 02:47:31 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:41.056 02:47:31 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:41.056 02:47:31 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3488969' 00:06:41.056 killing process with pid 3488969 00:06:41.056 02:47:31 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 3488969 00:06:41.056 02:47:31 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 3488969 00:06:41.315 02:47:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3489136 ]] 00:06:41.315 02:47:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3489136 00:06:41.315 02:47:32 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3489136 ']' 00:06:41.315 02:47:32 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3489136 00:06:41.315 02:47:32 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:06:41.315 02:47:32 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' 
Linux = Linux ']' 00:06:41.315 02:47:32 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3489136 00:06:41.574 02:47:32 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:06:41.574 02:47:32 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:06:41.574 02:47:32 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3489136' 00:06:41.574 killing process with pid 3489136 00:06:41.574 02:47:32 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 3489136 00:06:41.574 02:47:32 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 3489136 00:06:41.833 02:47:32 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:41.833 02:47:32 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:41.833 02:47:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3488969 ]] 00:06:41.833 02:47:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3488969 00:06:41.833 02:47:32 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3488969 ']' 00:06:41.833 02:47:32 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3488969 00:06:41.833 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3488969) - No such process 00:06:41.833 02:47:32 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 3488969 is not found' 00:06:41.833 Process with pid 3488969 is not found 00:06:41.833 02:47:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3489136 ]] 00:06:41.833 02:47:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3489136 00:06:41.833 02:47:32 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3489136 ']' 00:06:41.833 02:47:32 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3489136 00:06:41.833 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3489136) - No such process 00:06:41.833 02:47:32 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 3489136 is not found' 00:06:41.833 Process with pid 3489136 is not found 00:06:41.833 02:47:32 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:41.833 00:06:41.833 real 0m13.454s 00:06:41.833 user 0m22.781s 00:06:41.833 sys 0m5.345s 00:06:41.833 02:47:32 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:41.833 02:47:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:41.833 ************************************ 00:06:41.833 END TEST cpu_locks 00:06:41.833 ************************************ 00:06:41.833 00:06:41.833 real 0m37.720s 00:06:41.833 user 1m10.279s 00:06:41.833 sys 0m9.575s 00:06:41.833 02:47:32 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:41.833 02:47:32 event -- common/autotest_common.sh@10 -- # set +x 00:06:41.833 ************************************ 00:06:41.833 END TEST event 00:06:41.833 ************************************ 00:06:41.833 02:47:32 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:06:41.833 02:47:32 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:41.833 02:47:32 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:41.833 02:47:32 -- common/autotest_common.sh@10 -- # set +x 00:06:41.833 ************************************ 00:06:41.833 START TEST thread 00:06:41.833 ************************************ 00:06:41.833 02:47:32 thread -- 
common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:06:42.093 * Looking for test storage... 00:06:42.093 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread 00:06:42.093 02:47:32 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:42.093 02:47:32 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:42.093 02:47:32 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:42.093 02:47:32 thread -- common/autotest_common.sh@10 -- # set +x 00:06:42.093 ************************************ 00:06:42.093 START TEST thread_poller_perf 00:06:42.093 ************************************ 00:06:42.093 02:47:32 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:42.093 [2024-05-13 02:47:32.766468] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:06:42.093 [2024-05-13 02:47:32.766584] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3489555 ] 00:06:42.093 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.093 [2024-05-13 02:47:32.808389] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:42.093 [2024-05-13 02:47:32.839689] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.093 [2024-05-13 02:47:32.878215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.093 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:43.475 ====================================== 00:06:43.475 busy:2504608746 (cyc) 00:06:43.475 total_run_count: 881000 00:06:43.475 tsc_hz: 2500000000 (cyc) 00:06:43.475 ====================================== 00:06:43.475 poller_cost: 2842 (cyc), 1136 (nsec) 00:06:43.475 00:06:43.475 real 0m1.186s 00:06:43.475 user 0m1.092s 00:06:43.475 sys 0m0.090s 00:06:43.475 02:47:33 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:43.475 02:47:33 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:43.475 ************************************ 00:06:43.475 END TEST thread_poller_perf 00:06:43.475 ************************************ 00:06:43.475 02:47:33 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:43.475 02:47:33 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:43.475 02:47:33 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:43.475 02:47:33 thread -- common/autotest_common.sh@10 -- # set +x 00:06:43.476 ************************************ 00:06:43.476 START TEST thread_poller_perf 00:06:43.476 ************************************ 00:06:43.476 02:47:34 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:43.476 [2024-05-13 02:47:34.042988] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 
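The poller_cost summary above is a straight derivation from the reported counters: cycles per poll is busy cycles divided by total_run_count, and the nanosecond figure converts that through the reported tsc_hz. A minimal shell sketch of the same arithmetic, using the numbers from the 1 us-period run above (this reproduces the report's math, not the perf tool's actual code):

    busy=2504608746; runs=881000; tsc_hz=2500000000
    cyc=$(( busy / runs ))                    # 2842 cycles per poller invocation
    nsec=$(( cyc * 1000000000 / tsc_hz ))     # 1136 ns at the reported 2.5 GHz TSC
    echo "poller_cost: ${cyc} (cyc), ${nsec} (nsec)"

The 0 us-period run that follows reports the same way: 13917000 runs over roughly 2.5e9 busy cycles gives 179 cycles, about 71 ns per poll.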
00:06:43.476 [2024-05-13 02:47:34.043116] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3489838 ] 00:06:43.476 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.476 [2024-05-13 02:47:34.085854] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:43.476 [2024-05-13 02:47:34.118356] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.476 [2024-05-13 02:47:34.154765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.476 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:44.414 ====================================== 00:06:44.414 busy:2501468016 (cyc) 00:06:44.414 total_run_count: 13917000 00:06:44.414 tsc_hz: 2500000000 (cyc) 00:06:44.414 ====================================== 00:06:44.414 poller_cost: 179 (cyc), 71 (nsec) 00:06:44.414 00:06:44.414 real 0m1.185s 00:06:44.414 user 0m1.088s 00:06:44.414 sys 0m0.093s 00:06:44.414 02:47:35 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:44.414 02:47:35 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:44.414 ************************************ 00:06:44.414 END TEST thread_poller_perf 00:06:44.414 ************************************ 00:06:44.673 02:47:35 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:06:44.673 02:47:35 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:06:44.673 02:47:35 thread -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:44.673 02:47:35 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:44.673 02:47:35 thread -- common/autotest_common.sh@10 -- # set +x 00:06:44.673 ************************************ 00:06:44.673 START TEST thread_spdk_lock 00:06:44.673 ************************************ 00:06:44.673 02:47:35 thread.thread_spdk_lock -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:06:44.673 [2024-05-13 02:47:35.317595] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:06:44.673 [2024-05-13 02:47:35.317675] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3490123 ] 00:06:44.673 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.673 [2024-05-13 02:47:35.359170] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:44.673 [2024-05-13 02:47:35.389847] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:44.673 [2024-05-13 02:47:35.430565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.673 [2024-05-13 02:47:35.430568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.241 [2024-05-13 02:47:35.920547] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 961:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:45.241 [2024-05-13 02:47:35.920580] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3072:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:06:45.241 [2024-05-13 02:47:35.920591] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3027:sspin_stacks_print: *ERROR*: spinlock 0x13651c0 00:06:45.241 [2024-05-13 02:47:35.921415] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 856:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:45.241 [2024-05-13 02:47:35.921519] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1022:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:45.241 [2024-05-13 02:47:35.921538] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 856:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:45.241 Starting test contend 00:06:45.241 Worker Delay Wait us Hold us Total us 00:06:45.241 0 3 170045 185472 355517 00:06:45.241 1 5 82212 286585 368798 00:06:45.241 PASS test contend 00:06:45.241 Starting test hold_by_poller 00:06:45.241 PASS test hold_by_poller 00:06:45.241 Starting test hold_by_message 00:06:45.241 PASS test hold_by_message 00:06:45.241 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock summary: 00:06:45.241 100014 assertions passed 00:06:45.241 0 assertions failed 00:06:45.241 00:06:45.241 real 0m0.674s 00:06:45.241 user 0m1.066s 00:06:45.241 sys 0m0.096s 00:06:45.241 02:47:35 thread.thread_spdk_lock -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:45.241 02:47:35 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:06:45.241 ************************************ 00:06:45.241 END TEST thread_spdk_lock 00:06:45.241 ************************************ 00:06:45.241 00:06:45.241 real 0m3.422s 00:06:45.241 user 0m3.373s 00:06:45.241 sys 0m0.541s 00:06:45.241 02:47:36 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:45.241 02:47:36 thread -- common/autotest_common.sh@10 -- # set +x 00:06:45.241 ************************************ 00:06:45.241 END TEST thread 00:06:45.241 ************************************ 00:06:45.500 02:47:36 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel.sh 00:06:45.500 02:47:36 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:45.500 02:47:36 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:45.500 02:47:36 -- common/autotest_common.sh@10 -- # set +x 00:06:45.500 ************************************ 00:06:45.500 START TEST accel 00:06:45.500 ************************************ 00:06:45.500 
02:47:36 accel -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel.sh 00:06:45.500 * Looking for test storage... 00:06:45.500 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel 00:06:45.500 02:47:36 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:45.500 02:47:36 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:45.500 02:47:36 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:45.500 02:47:36 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=3490201 00:06:45.500 02:47:36 accel -- accel/accel.sh@63 -- # waitforlisten 3490201 00:06:45.500 02:47:36 accel -- common/autotest_common.sh@827 -- # '[' -z 3490201 ']' 00:06:45.500 02:47:36 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.500 02:47:36 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:45.500 02:47:36 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:45.500 02:47:36 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:45.501 02:47:36 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.501 02:47:36 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:45.501 02:47:36 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:45.501 02:47:36 accel -- common/autotest_common.sh@10 -- # set +x 00:06:45.501 02:47:36 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:45.501 02:47:36 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.501 02:47:36 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.501 02:47:36 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:45.501 02:47:36 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:45.501 02:47:36 accel -- accel/accel.sh@41 -- # jq -r . 00:06:45.501 [2024-05-13 02:47:36.238507] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:06:45.501 [2024-05-13 02:47:36.238590] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3490201 ] 00:06:45.501 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.501 [2024-05-13 02:47:36.278921] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:45.759 [2024-05-13 02:47:36.307691] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.759 [2024-05-13 02:47:36.347973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.759 02:47:36 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:45.759 02:47:36 accel -- common/autotest_common.sh@860 -- # return 0 00:06:45.759 02:47:36 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:45.759 02:47:36 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:45.759 02:47:36 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:45.759 02:47:36 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:45.759 02:47:36 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". 
| to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:45.759 02:47:36 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:45.759 02:47:36 accel -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:06:45.759 02:47:36 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.759 02:47:36 accel -- common/autotest_common.sh@10 -- # set +x 00:06:45.759 02:47:36 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.018 02:47:36 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:46.018 02:47:36 accel -- accel/accel.sh@72 -- # IFS== 00:06:46.018 02:47:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:46.018 02:47:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:46.018 02:47:36 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:46.018 02:47:36 accel -- accel/accel.sh@72 -- # IFS== 00:06:46.018 02:47:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:46.018 02:47:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:46.018 02:47:36 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:46.018 02:47:36 accel -- accel/accel.sh@72 -- # IFS== 00:06:46.018 02:47:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:46.018 02:47:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:46.018 02:47:36 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:46.018 02:47:36 accel -- accel/accel.sh@72 -- # IFS== 00:06:46.018 02:47:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:46.018 02:47:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:46.018 02:47:36 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:46.018 02:47:36 accel -- accel/accel.sh@72 -- # IFS== 00:06:46.018 02:47:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:46.018 02:47:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:46.018 02:47:36 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:46.018 02:47:36 accel -- accel/accel.sh@72 -- # IFS== 00:06:46.018 02:47:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:46.018 02:47:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:46.018 02:47:36 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:46.018 02:47:36 accel -- accel/accel.sh@72 -- # IFS== 00:06:46.018 02:47:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:46.018 02:47:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:46.018 02:47:36 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:46.018 02:47:36 accel -- accel/accel.sh@72 -- # IFS== 00:06:46.018 02:47:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:46.018 02:47:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:46.018 02:47:36 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:46.018 02:47:36 accel -- accel/accel.sh@72 -- # IFS== 00:06:46.018 02:47:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:46.018 02:47:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:46.018 02:47:36 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:46.018 02:47:36 accel -- accel/accel.sh@72 -- # IFS== 00:06:46.018 02:47:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:46.018 02:47:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:46.018 02:47:36 accel -- 
accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:46.018 02:47:36 accel -- accel/accel.sh@72 -- # IFS== 00:06:46.018 02:47:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:46.018 02:47:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:46.018 02:47:36 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:46.018 02:47:36 accel -- accel/accel.sh@72 -- # IFS== 00:06:46.018 02:47:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:46.018 02:47:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:46.018 02:47:36 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:46.018 02:47:36 accel -- accel/accel.sh@72 -- # IFS== 00:06:46.018 02:47:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:46.018 02:47:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:46.018 02:47:36 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:46.018 02:47:36 accel -- accel/accel.sh@72 -- # IFS== 00:06:46.018 02:47:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:46.018 02:47:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:46.018 02:47:36 accel -- accel/accel.sh@75 -- # killprocess 3490201 00:06:46.018 02:47:36 accel -- common/autotest_common.sh@946 -- # '[' -z 3490201 ']' 00:06:46.018 02:47:36 accel -- common/autotest_common.sh@950 -- # kill -0 3490201 00:06:46.018 02:47:36 accel -- common/autotest_common.sh@951 -- # uname 00:06:46.018 02:47:36 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:46.018 02:47:36 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3490201 00:06:46.018 02:47:36 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:46.018 02:47:36 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:46.018 02:47:36 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3490201' 00:06:46.018 killing process with pid 3490201 00:06:46.018 02:47:36 accel -- common/autotest_common.sh@965 -- # kill 3490201 00:06:46.018 02:47:36 accel -- common/autotest_common.sh@970 -- # wait 3490201 00:06:46.277 02:47:36 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:46.277 02:47:36 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:46.277 02:47:36 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:46.277 02:47:36 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:46.277 02:47:36 accel -- common/autotest_common.sh@10 -- # set +x 00:06:46.277 02:47:36 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:06:46.277 02:47:36 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:46.277 02:47:36 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:46.277 02:47:36 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:46.277 02:47:36 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:46.277 02:47:36 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.277 02:47:36 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.277 02:47:36 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:46.277 02:47:36 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:46.277 02:47:36 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
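The expected_opcs table filled in by the loop earlier in this log is driven by the accel_get_opc_assignments RPC: its JSON object is flattened into key=value lines by the jq filter and each entry is then split on '=' into opc and module, as the xtrace above shows. A small stand-alone illustration of that jq step with a made-up two-opcode object (the real RPC returns one entry per supported opcode):

    echo '{"copy":"software","crc32c":"software"}' \
      | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
    # prints:
    #   copy=software
    #   crc32c=software
    # each line is then split on '=' into opc and module, as in the loop above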
00:06:46.277 02:47:36 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:46.277 02:47:36 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:46.277 02:47:37 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:46.277 02:47:37 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:46.277 02:47:37 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:46.277 02:47:37 accel -- common/autotest_common.sh@10 -- # set +x 00:06:46.277 ************************************ 00:06:46.277 START TEST accel_missing_filename 00:06:46.277 ************************************ 00:06:46.277 02:47:37 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:06:46.277 02:47:37 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:46.277 02:47:37 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:46.277 02:47:37 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:46.277 02:47:37 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:46.277 02:47:37 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:46.277 02:47:37 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:46.277 02:47:37 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:46.277 02:47:37 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:46.277 02:47:37 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:46.277 02:47:37 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:46.277 02:47:37 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:46.277 02:47:37 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.277 02:47:37 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.277 02:47:37 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:46.277 02:47:37 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:46.277 02:47:37 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:46.536 [2024-05-13 02:47:37.083097] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:06:46.536 [2024-05-13 02:47:37.083176] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3490492 ] 00:06:46.536 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.536 [2024-05-13 02:47:37.122082] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:46.536 [2024-05-13 02:47:37.153799] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.536 [2024-05-13 02:47:37.192052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.536 [2024-05-13 02:47:37.231666] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:46.536 [2024-05-13 02:47:37.291623] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:06:46.796 A filename is required. 00:06:46.796 02:47:37 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:46.796 02:47:37 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:46.796 02:47:37 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:46.796 02:47:37 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:46.796 02:47:37 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:46.796 02:47:37 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:46.796 00:06:46.796 real 0m0.287s 00:06:46.796 user 0m0.176s 00:06:46.796 sys 0m0.134s 00:06:46.796 02:47:37 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:46.796 02:47:37 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:46.796 ************************************ 00:06:46.796 END TEST accel_missing_filename 00:06:46.796 ************************************ 00:06:46.796 02:47:37 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:46.796 02:47:37 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:06:46.796 02:47:37 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:46.796 02:47:37 accel -- common/autotest_common.sh@10 -- # set +x 00:06:46.796 ************************************ 00:06:46.796 START TEST accel_compress_verify 00:06:46.796 ************************************ 00:06:46.796 02:47:37 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:46.796 02:47:37 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:46.796 02:47:37 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:46.796 02:47:37 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:46.796 02:47:37 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:46.796 02:47:37 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:46.796 02:47:37 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:46.796 02:47:37 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:46.796 02:47:37 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:46.796 02:47:37 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 
00:06:46.796 02:47:37 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:46.796 02:47:37 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:46.796 02:47:37 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.796 02:47:37 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.796 02:47:37 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:46.796 02:47:37 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:46.796 02:47:37 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:46.796 [2024-05-13 02:47:37.449690] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:06:46.796 [2024-05-13 02:47:37.449786] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3490513 ] 00:06:46.796 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.796 [2024-05-13 02:47:37.489783] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:46.796 [2024-05-13 02:47:37.520564] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.796 [2024-05-13 02:47:37.558054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.796 [2024-05-13 02:47:37.597746] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:47.055 [2024-05-13 02:47:37.657422] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:06:47.055 00:06:47.055 Compression does not support the verify option, aborting. 00:06:47.055 02:47:37 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:47.055 02:47:37 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:47.055 02:47:37 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:47.055 02:47:37 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:47.055 02:47:37 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:47.055 02:47:37 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:47.055 00:06:47.055 real 0m0.287s 00:06:47.055 user 0m0.192s 00:06:47.055 sys 0m0.132s 00:06:47.055 02:47:37 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:47.055 02:47:37 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:47.055 ************************************ 00:06:47.055 END TEST accel_compress_verify 00:06:47.055 ************************************ 00:06:47.055 02:47:37 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:47.055 02:47:37 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:47.055 02:47:37 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:47.055 02:47:37 accel -- common/autotest_common.sh@10 -- # set +x 00:06:47.055 ************************************ 00:06:47.055 START TEST accel_wrong_workload 00:06:47.055 ************************************ 00:06:47.055 02:47:37 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:06:47.055 02:47:37 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:47.055 02:47:37 
accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:47.055 02:47:37 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:47.055 02:47:37 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:47.055 02:47:37 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:47.055 02:47:37 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:47.055 02:47:37 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:06:47.055 02:47:37 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:47.055 02:47:37 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:47.055 02:47:37 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:47.055 02:47:37 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:47.055 02:47:37 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.055 02:47:37 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.055 02:47:37 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:47.055 02:47:37 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:47.055 02:47:37 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:47.055 Unsupported workload type: foobar 00:06:47.055 [2024-05-13 02:47:37.817421] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:47.055 accel_perf options: 00:06:47.055 [-h help message] 00:06:47.055 [-q queue depth per core] 00:06:47.055 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:47.055 [-T number of threads per core 00:06:47.055 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:47.055 [-t time in seconds] 00:06:47.055 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:47.055 [ dif_verify, , dif_generate, dif_generate_copy 00:06:47.055 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:47.055 [-l for compress/decompress workloads, name of uncompressed input file 00:06:47.055 [-S for crc32c workload, use this seed value (default 0) 00:06:47.055 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:47.055 [-f for fill workload, use this BYTE value (default 255) 00:06:47.055 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:47.055 [-y verify result if this switch is on] 00:06:47.055 [-a tasks to allocate per core (default: same value as -q)] 00:06:47.055 Can be used to spread operations across a wider range of memory. 
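The usage dump above comes from accel_wrong_workload passing the unsupported 'foobar' type; it lists every knob accel_perf accepts. Assembled purely from options shown in that help text, a typical valid run would be along these lines (SPDK_DIR again stands in for the checkout path used throughout this log):

    $SPDK_DIR/build/examples/accel_perf -t 1 -q 32 -o 4096 -w crc32c -S 32 -y
    # -t seconds to run, -q queue depth per core, -o transfer size in bytes,
    # -w workload type, -S crc32c seed value, -y verify the result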
00:06:47.055 02:47:37 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:47.056 02:47:37 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:47.056 02:47:37 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:47.056 02:47:37 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:47.056 00:06:47.056 real 0m0.023s 00:06:47.056 user 0m0.019s 00:06:47.056 sys 0m0.004s 00:06:47.056 02:47:37 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:47.056 02:47:37 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:47.056 ************************************ 00:06:47.056 END TEST accel_wrong_workload 00:06:47.056 ************************************ 00:06:47.056 Error: writing output failed: Broken pipe 00:06:47.056 02:47:37 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:47.056 02:47:37 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:06:47.056 02:47:37 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:47.056 02:47:37 accel -- common/autotest_common.sh@10 -- # set +x 00:06:47.314 ************************************ 00:06:47.314 START TEST accel_negative_buffers 00:06:47.314 ************************************ 00:06:47.314 02:47:37 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:47.314 02:47:37 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:47.314 02:47:37 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:47.314 02:47:37 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:47.314 02:47:37 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:47.314 02:47:37 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:47.314 02:47:37 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:47.314 02:47:37 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:47.314 02:47:37 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:47.314 02:47:37 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:47.314 02:47:37 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:47.314 02:47:37 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:47.314 02:47:37 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.314 02:47:37 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.314 02:47:37 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:47.314 02:47:37 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:47.314 02:47:37 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:47.314 -x option must be non-negative. 
00:06:47.314 [2024-05-13 02:47:37.917178] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:47.314 accel_perf options: 00:06:47.314 [-h help message] 00:06:47.314 [-q queue depth per core] 00:06:47.314 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:47.314 [-T number of threads per core 00:06:47.314 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:47.314 [-t time in seconds] 00:06:47.314 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:47.314 [ dif_verify, , dif_generate, dif_generate_copy 00:06:47.314 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:47.314 [-l for compress/decompress workloads, name of uncompressed input file 00:06:47.314 [-S for crc32c workload, use this seed value (default 0) 00:06:47.315 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:47.315 [-f for fill workload, use this BYTE value (default 255) 00:06:47.315 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:47.315 [-y verify result if this switch is on] 00:06:47.315 [-a tasks to allocate per core (default: same value as -q)] 00:06:47.315 Can be used to spread operations across a wider range of memory. 00:06:47.315 02:47:37 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:47.315 02:47:37 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:47.315 02:47:37 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:47.315 02:47:37 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:47.315 00:06:47.315 real 0m0.025s 00:06:47.315 user 0m0.010s 00:06:47.315 sys 0m0.014s 00:06:47.315 02:47:37 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:47.315 02:47:37 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:47.315 ************************************ 00:06:47.315 END TEST accel_negative_buffers 00:06:47.315 ************************************ 00:06:47.315 Error: writing output failed: Broken pipe 00:06:47.315 02:47:37 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:47.315 02:47:37 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:47.315 02:47:37 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:47.315 02:47:37 accel -- common/autotest_common.sh@10 -- # set +x 00:06:47.315 ************************************ 00:06:47.315 START TEST accel_crc32c 00:06:47.315 ************************************ 00:06:47.315 02:47:37 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:47.315 02:47:37 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:47.315 02:47:37 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:47.315 02:47:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:47.315 02:47:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:47.315 02:47:37 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:47.315 02:47:37 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 
32 -y 00:06:47.315 02:47:37 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:47.315 02:47:38 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:47.315 02:47:38 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:47.315 02:47:38 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.315 02:47:38 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.315 02:47:38 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:47.315 02:47:38 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:47.315 02:47:38 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:47.315 [2024-05-13 02:47:38.016047] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:06:47.315 [2024-05-13 02:47:38.016131] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3490711 ] 00:06:47.315 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.315 [2024-05-13 02:47:38.054795] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:47.315 [2024-05-13 02:47:38.086616] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.577 [2024-05-13 02:47:38.125488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.577 02:47:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:47.577 02:47:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:47.577 02:47:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:47.577 02:47:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:47.577 02:47:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:47.577 02:47:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:47.577 02:47:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:47.577 02:47:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:47.577 02:47:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:47.577 02:47:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:47.577 02:47:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:47.577 02:47:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:47.577 02:47:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:47.577 02:47:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:47.577 02:47:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:47.577 02:47:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:47.577 02:47:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:47.577 02:47:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:47.577 02:47:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:47.577 02:47:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:47.577 02:47:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:47.577 02:47:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:47.577 02:47:38 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:47.577 02:47:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:47.577 02:47:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:47.577 02:47:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:47.577 
02:47:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:47.577 02:47:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:47.577 02:47:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:47.577 02:47:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:47.577 02:47:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:47.577 02:47:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:47.577 02:47:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:47.577 02:47:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:47.577 02:47:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:47.577 02:47:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:47.577 02:47:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:47.577 02:47:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:47.577 02:47:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:47.577 02:47:38 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:47.577 02:47:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:47.577 02:47:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:47.577 02:47:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:47.577 02:47:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:47.577 02:47:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:47.577 02:47:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:47.577 02:47:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:47.577 02:47:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:47.577 02:47:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:47.577 02:47:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:47.577 02:47:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:47.577 02:47:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:47.577 02:47:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:47.577 02:47:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:47.577 02:47:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:47.578 02:47:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:47.578 02:47:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:47.578 02:47:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:47.578 02:47:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:47.578 02:47:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:47.578 02:47:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:47.578 02:47:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:47.578 02:47:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:47.578 02:47:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:47.578 02:47:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:47.578 02:47:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:47.578 02:47:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:47.578 02:47:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:47.578 02:47:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:47.578 02:47:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.554 02:47:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:48.554 02:47:39 
accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.554 02:47:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.554 02:47:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.554 02:47:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:48.554 02:47:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.554 02:47:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.554 02:47:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.554 02:47:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:48.554 02:47:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.554 02:47:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.554 02:47:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.554 02:47:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:48.554 02:47:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.554 02:47:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.554 02:47:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.554 02:47:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:48.554 02:47:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.554 02:47:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.554 02:47:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.554 02:47:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:48.554 02:47:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.554 02:47:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.554 02:47:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.554 02:47:39 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:48.554 02:47:39 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:48.554 02:47:39 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:48.554 00:06:48.554 real 0m1.290s 00:06:48.554 user 0m1.169s 00:06:48.554 sys 0m0.123s 00:06:48.554 02:47:39 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:48.554 02:47:39 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:48.554 ************************************ 00:06:48.554 END TEST accel_crc32c 00:06:48.554 ************************************ 00:06:48.555 02:47:39 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:48.555 02:47:39 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:48.555 02:47:39 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:48.555 02:47:39 accel -- common/autotest_common.sh@10 -- # set +x 00:06:48.555 ************************************ 00:06:48.555 START TEST accel_crc32c_C2 00:06:48.555 ************************************ 00:06:48.555 02:47:39 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:48.555 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:48.555 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:48.555 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.555 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.555 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:48.555 02:47:39 accel.accel_crc32c_C2 -- 
accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:48.555 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:48.555 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:48.555 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:48.555 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.555 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.815 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:48.815 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:48.815 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:48.815 [2024-05-13 02:47:39.371916] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:06:48.815 [2024-05-13 02:47:39.371995] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3490905 ] 00:06:48.815 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.815 [2024-05-13 02:47:39.409719] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:48.815 [2024-05-13 02:47:39.441190] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.815 [2024-05-13 02:47:39.478872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.815 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:48.815 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.815 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.815 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.815 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:48.815 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.815 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.815 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.815 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:48.815 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.815 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.815 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.815 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:48.815 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.815 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.815 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.815 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:48.815 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.815 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.815 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.815 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:48.815 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.815 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:48.815 
02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.815 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.815 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:48.815 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.815 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.815 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.815 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:48.816 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.816 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.816 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.816 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:48.816 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.816 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.816 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.816 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:48.816 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.816 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:48.816 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.816 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.816 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:48.816 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.816 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.816 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.816 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:48.816 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.816 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.816 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.816 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:48.816 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.816 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.816 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.816 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:48.816 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.816 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.816 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.816 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:48.816 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.816 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.816 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.816 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:48.816 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.816 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.816 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.816 02:47:39 accel.accel_crc32c_C2 -- 
accel/accel.sh@20 -- # val= 00:06:48.816 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.816 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.816 02:47:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.197 02:47:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:50.197 02:47:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.197 02:47:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.197 02:47:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.197 02:47:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:50.197 02:47:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.197 02:47:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.197 02:47:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.197 02:47:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:50.197 02:47:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.197 02:47:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.197 02:47:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.197 02:47:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:50.197 02:47:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.197 02:47:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.197 02:47:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.197 02:47:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:50.197 02:47:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.197 02:47:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.197 02:47:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.197 02:47:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:50.197 02:47:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.197 02:47:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.197 02:47:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.197 02:47:40 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:50.197 02:47:40 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:50.197 02:47:40 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:50.197 00:06:50.197 real 0m1.287s 00:06:50.197 user 0m1.158s 00:06:50.197 sys 0m0.132s 00:06:50.197 02:47:40 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:50.197 02:47:40 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:50.197 ************************************ 00:06:50.197 END TEST accel_crc32c_C2 00:06:50.197 ************************************ 00:06:50.197 02:47:40 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:50.197 02:47:40 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:50.197 02:47:40 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:50.197 02:47:40 accel -- common/autotest_common.sh@10 -- # set +x 00:06:50.197 ************************************ 00:06:50.197 START TEST accel_copy 00:06:50.197 ************************************ 00:06:50.197 02:47:40 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:06:50.197 02:47:40 accel.accel_copy -- accel/accel.sh@16 -- # local 
accel_opc 00:06:50.197 02:47:40 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:50.197 02:47:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.197 02:47:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:50.198 [2024-05-13 02:47:40.738581] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:06:50.198 [2024-05-13 02:47:40.738665] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3491158 ] 00:06:50.198 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.198 [2024-05-13 02:47:40.776812] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
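Each of these accel unit tests drives the same SPDK example binary, build/examples/accel_perf, and only varies the -w workload (crc32c, copy, fill, and so on); the build_accel_config trace above shows an empty accel JSON config being passed over /dev/fd/62. A rough standalone reproduction of the copy run, assuming an SPDK build tree at $SPDK_DIR and omitting the fd-based config, might look like:

    # sketch only: flag meanings inferred from this trace, not from accel_perf's help text
    SPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk   # path taken from the log
    $SPDK_DIR/build/examples/accel_perf -t 1 -w copy -y            # -t 1: ~1 second run, -w copy: copy opcode, -y: verify results (inferred)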
00:06:50.198 [2024-05-13 02:47:40.808580] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.198 [2024-05-13 02:47:40.846493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:50.198 02:47:40 accel.accel_copy -- 
accel/accel.sh@21 -- # case "$var" in 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.198 02:47:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.580 02:47:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:51.580 02:47:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.580 02:47:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.580 02:47:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.580 02:47:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:51.580 02:47:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.580 02:47:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.580 02:47:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.580 02:47:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:51.580 02:47:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.580 02:47:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.580 02:47:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.580 02:47:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:51.580 02:47:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.580 02:47:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.580 02:47:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.580 02:47:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:51.580 02:47:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.580 02:47:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.580 02:47:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.580 02:47:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:51.580 02:47:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.580 02:47:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.580 02:47:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.580 02:47:42 
accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:51.580 02:47:42 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:51.580 02:47:42 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:51.580 00:06:51.580 real 0m1.288s 00:06:51.580 user 0m1.167s 00:06:51.580 sys 0m0.123s 00:06:51.580 02:47:42 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:51.580 02:47:42 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:51.580 ************************************ 00:06:51.580 END TEST accel_copy 00:06:51.580 ************************************ 00:06:51.580 02:47:42 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:51.580 02:47:42 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:06:51.580 02:47:42 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:51.580 02:47:42 accel -- common/autotest_common.sh@10 -- # set +x 00:06:51.580 ************************************ 00:06:51.580 START TEST accel_fill 00:06:51.580 ************************************ 00:06:51.580 02:47:42 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:51.580 02:47:42 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:51.580 02:47:42 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:51.580 02:47:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:51.580 02:47:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:51.580 02:47:42 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:51.580 02:47:42 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:51.580 02:47:42 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:51.580 02:47:42 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:51.580 02:47:42 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:51.580 02:47:42 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.580 02:47:42 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.580 02:47:42 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:51.580 02:47:42 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:51.580 02:47:42 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:51.580 [2024-05-13 02:47:42.104719] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:06:51.580 [2024-05-13 02:47:42.104802] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3491441 ] 00:06:51.580 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.580 [2024-05-13 02:47:42.143228] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
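The fill variant started here adds workload-specific flags on top of the common -t 1 ... -y set: -f 128, -q 64 and -a 64. In the config trace that follows, these line up with a 0x80 fill byte (128 decimal) and two 64 values (queue depth and, by inference, the -a value). A hedged equivalent invocation:

    # assumes the same $SPDK_DIR as in the earlier sketch; flag interpretations inferred from the val=... trace below
    $SPDK_DIR/build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y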
00:06:51.581 [2024-05-13 02:47:42.174159] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.581 [2024-05-13 02:47:42.211817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.581 02:47:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:51.581 02:47:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:51.581 02:47:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:51.581 02:47:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:51.581 02:47:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:51.581 02:47:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:51.581 02:47:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:51.581 02:47:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:51.581 02:47:42 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:51.581 02:47:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:51.581 02:47:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:51.581 02:47:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:51.581 02:47:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:51.581 02:47:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:51.581 02:47:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:51.581 02:47:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:51.581 02:47:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:51.581 02:47:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:51.581 02:47:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:51.581 02:47:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:51.581 02:47:42 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:51.581 02:47:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:51.581 02:47:42 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:51.581 02:47:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:51.581 02:47:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:51.581 02:47:42 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:51.581 02:47:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:51.581 02:47:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:51.581 02:47:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:51.581 02:47:42 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:51.581 02:47:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:51.581 02:47:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:51.581 02:47:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:51.581 02:47:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:51.581 02:47:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:51.581 02:47:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:51.581 02:47:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:51.581 02:47:42 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:51.581 02:47:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:51.581 02:47:42 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:51.581 02:47:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:51.581 02:47:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:51.581 02:47:42 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:51.581 02:47:42 accel.accel_fill -- 
accel/accel.sh@21 -- # case "$var" in 00:06:51.581 02:47:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:51.581 02:47:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:51.581 02:47:42 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:51.581 02:47:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:51.581 02:47:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:51.581 02:47:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:51.581 02:47:42 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:51.581 02:47:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:51.581 02:47:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:51.581 02:47:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:51.581 02:47:42 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:51.581 02:47:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:51.581 02:47:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:51.581 02:47:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:51.581 02:47:42 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:51.581 02:47:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:51.581 02:47:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:51.581 02:47:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:51.581 02:47:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:51.581 02:47:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:51.581 02:47:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:51.581 02:47:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:51.581 02:47:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:51.581 02:47:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:51.581 02:47:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:51.581 02:47:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:52.962 02:47:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:52.962 02:47:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:52.962 02:47:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:52.962 02:47:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:52.962 02:47:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:52.962 02:47:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:52.962 02:47:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:52.962 02:47:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:52.962 02:47:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:52.962 02:47:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:52.963 02:47:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:52.963 02:47:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:52.963 02:47:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:52.963 02:47:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:52.963 02:47:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:52.963 02:47:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:52.963 02:47:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:52.963 02:47:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:52.963 02:47:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:52.963 02:47:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:52.963 02:47:43 
accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:52.963 02:47:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:52.963 02:47:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:52.963 02:47:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:52.963 02:47:43 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:52.963 02:47:43 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:52.963 02:47:43 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:52.963 00:06:52.963 real 0m1.288s 00:06:52.963 user 0m1.164s 00:06:52.963 sys 0m0.126s 00:06:52.963 02:47:43 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:52.963 02:47:43 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:52.963 ************************************ 00:06:52.963 END TEST accel_fill 00:06:52.963 ************************************ 00:06:52.963 02:47:43 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:52.963 02:47:43 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:52.963 02:47:43 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:52.963 02:47:43 accel -- common/autotest_common.sh@10 -- # set +x 00:06:52.963 ************************************ 00:06:52.963 START TEST accel_copy_crc32c 00:06:52.963 ************************************ 00:06:52.963 02:47:43 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 00:06:52.963 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:52.963 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:52.963 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.963 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.963 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:52.963 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:52.963 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:52.963 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:52.963 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:52.963 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.963 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.963 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:52.963 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:52.963 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:52.963 [2024-05-13 02:47:43.472343] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:06:52.963 [2024-05-13 02:47:43.472422] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3491723 ] 00:06:52.963 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.963 [2024-05-13 02:47:43.511690] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
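Each test's pass/fail criterion is visible at the end of its trace: after the timed run, the script asserts that an accel module and opcode were recorded and that the software module handled the operation (the expanded [[ -n software ]], [[ -n fill ]] and [[ software == software ]] checks above). In shell terms the check amounts to something like:

    # paraphrase of the [[ ... ]] checks seen in the trace; accel_module and accel_opc are the variables set at accel.sh@22/@23
    [[ -n $accel_module ]] && [[ -n $accel_opc ]] && [[ $accel_module == software ]]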
00:06:52.963 [2024-05-13 02:47:43.540876] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.963 [2024-05-13 02:47:43.578474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.963 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:52.963 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.963 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.963 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.963 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:52.963 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.963 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.963 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.963 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:52.963 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.963 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.963 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.963 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:52.963 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.963 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.963 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.963 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:52.963 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.963 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.963 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.963 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:52.963 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.963 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:52.963 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.963 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.963 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:52.963 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.963 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.963 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.963 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:52.963 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.963 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.963 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.963 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:52.963 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.963 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.963 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.963 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:52.963 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.963 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 
00:06:52.963 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.964 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:52.964 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.964 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:52.964 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.964 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.964 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:52.964 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.964 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.964 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.964 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:52.964 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.964 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.964 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.964 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:52.964 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.964 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.964 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.964 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:52.964 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.964 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.964 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.964 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:52.964 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.964 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.964 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.964 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:52.964 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.964 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.964 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.964 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:52.964 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.964 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.964 02:47:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.343 02:47:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:54.343 02:47:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.343 02:47:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.343 02:47:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.343 02:47:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:54.343 02:47:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.343 02:47:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.343 02:47:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.343 02:47:44 accel.accel_copy_crc32c 
-- accel/accel.sh@20 -- # val= 00:06:54.343 02:47:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.343 02:47:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.343 02:47:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.343 02:47:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:54.343 02:47:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.343 02:47:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.343 02:47:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.343 02:47:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:54.343 02:47:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.343 02:47:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.343 02:47:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.343 02:47:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:54.343 02:47:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.343 02:47:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.343 02:47:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.343 02:47:44 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:54.343 02:47:44 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:54.343 02:47:44 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:54.343 00:06:54.343 real 0m1.289s 00:06:54.343 user 0m1.161s 00:06:54.343 sys 0m0.130s 00:06:54.343 02:47:44 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:54.343 02:47:44 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:54.343 ************************************ 00:06:54.343 END TEST accel_copy_crc32c 00:06:54.343 ************************************ 00:06:54.343 02:47:44 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:54.343 02:47:44 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:54.343 02:47:44 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:54.343 02:47:44 accel -- common/autotest_common.sh@10 -- # set +x 00:06:54.343 ************************************ 00:06:54.343 START TEST accel_copy_crc32c_C2 00:06:54.343 ************************************ 00:06:54.343 02:47:44 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:54.343 02:47:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:54.343 02:47:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:54.343 02:47:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.343 02:47:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.343 02:47:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:54.343 02:47:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:54.343 02:47:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:54.343 02:47:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:54.343 02:47:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 
-gt 0 ]] 00:06:54.343 02:47:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.343 02:47:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.343 02:47:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:54.343 02:47:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:54.343 02:47:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:54.343 [2024-05-13 02:47:44.847298] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:06:54.343 [2024-05-13 02:47:44.847392] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3492010 ] 00:06:54.343 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.343 [2024-05-13 02:47:44.887790] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:54.343 [2024-05-13 02:47:44.918742] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.343 [2024-05-13 02:47:44.959338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.343 02:47:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:54.343 02:47:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.343 02:47:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.343 02:47:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.343 02:47:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:54.343 02:47:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.343 02:47:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.343 02:47:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.343 02:47:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:54.343 02:47:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.343 02:47:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.343 02:47:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.343 02:47:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:54.343 02:47:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.343 02:47:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.343 02:47:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.343 02:47:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:54.343 02:47:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.343 02:47:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.343 02:47:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.343 02:47:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:54.344 02:47:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.344 02:47:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:54.344 02:47:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.344 02:47:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.344 02:47:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 
00:06:54.344 02:47:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.344 02:47:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.344 02:47:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.344 02:47:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:54.344 02:47:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.344 02:47:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.344 02:47:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.344 02:47:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:54.344 02:47:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.344 02:47:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.344 02:47:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.344 02:47:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:54.344 02:47:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.344 02:47:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.344 02:47:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.344 02:47:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:54.344 02:47:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.344 02:47:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:54.344 02:47:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.344 02:47:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.344 02:47:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:54.344 02:47:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.344 02:47:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.344 02:47:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.344 02:47:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:54.344 02:47:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.344 02:47:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.344 02:47:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.344 02:47:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:54.344 02:47:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.344 02:47:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.344 02:47:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.344 02:47:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:54.344 02:47:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.344 02:47:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.344 02:47:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.344 02:47:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:54.344 02:47:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.344 02:47:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.344 02:47:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.344 02:47:45 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@20 -- # val= 00:06:54.344 02:47:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.344 02:47:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.344 02:47:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.344 02:47:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:54.344 02:47:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.344 02:47:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.344 02:47:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.722 02:47:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:55.722 02:47:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.722 02:47:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.722 02:47:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.722 02:47:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:55.722 02:47:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.722 02:47:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.722 02:47:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.722 02:47:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:55.722 02:47:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.722 02:47:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.722 02:47:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.722 02:47:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:55.722 02:47:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.722 02:47:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.722 02:47:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.722 02:47:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:55.722 02:47:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.722 02:47:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.722 02:47:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.722 02:47:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:55.722 02:47:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.722 02:47:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.722 02:47:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.722 02:47:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:55.722 02:47:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:55.722 02:47:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:55.722 00:06:55.722 real 0m1.298s 00:06:55.722 user 0m1.172s 00:06:55.722 sys 0m0.129s 00:06:55.722 02:47:46 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:55.722 02:47:46 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:55.722 ************************************ 00:06:55.722 END TEST accel_copy_crc32c_C2 00:06:55.722 ************************************ 00:06:55.722 02:47:46 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:55.722 02:47:46 accel 
-- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:55.722 02:47:46 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:55.722 02:47:46 accel -- common/autotest_common.sh@10 -- # set +x 00:06:55.722 ************************************ 00:06:55.722 START TEST accel_dualcast 00:06:55.722 ************************************ 00:06:55.722 02:47:46 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:06:55.722 02:47:46 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:55.722 02:47:46 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:55.722 02:47:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:55.722 02:47:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:55.722 02:47:46 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:55.722 02:47:46 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:55.722 02:47:46 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:55.722 02:47:46 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:55.722 02:47:46 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:55.722 02:47:46 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.722 02:47:46 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.722 02:47:46 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:55.722 02:47:46 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:55.722 02:47:46 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:55.722 [2024-05-13 02:47:46.228717] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:06:55.722 [2024-05-13 02:47:46.228792] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3492289 ] 00:06:55.722 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.722 [2024-05-13 02:47:46.268160] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
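The copy_crc32c "C2" run that just finished differs from the plain copy_crc32c run only in the -C 2 flag; its trace shows an 8192-byte destination next to the 4096-byte source, which is consistent with -C 2 submitting two 4096-byte chunks per operation (an inference from the buffer sizes in the trace, not from the tool's documentation). The dualcast run starting here returns to the minimal flag set:

    # same assumptions as the earlier sketches
    $SPDK_DIR/build/examples/accel_perf -t 1 -w dualcast -y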
00:06:55.722 [2024-05-13 02:47:46.301461] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.722 [2024-05-13 02:47:46.340788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.722 02:47:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:55.722 02:47:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:55.722 02:47:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:55.722 02:47:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:55.722 02:47:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:55.722 02:47:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:55.722 02:47:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:55.722 02:47:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:55.722 02:47:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:55.722 02:47:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:55.722 02:47:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:55.722 02:47:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:55.722 02:47:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:55.722 02:47:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:55.722 02:47:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:55.722 02:47:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:55.722 02:47:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:55.722 02:47:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:55.722 02:47:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:55.722 02:47:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:55.722 02:47:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:55.722 02:47:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:55.722 02:47:46 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:55.722 02:47:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:55.722 02:47:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:55.722 02:47:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:55.722 02:47:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:55.722 02:47:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:55.722 02:47:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:55.722 02:47:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:55.722 02:47:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:55.722 02:47:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:55.722 02:47:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:55.722 02:47:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:55.722 02:47:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:55.722 02:47:46 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:55.722 02:47:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:55.722 02:47:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:55.722 02:47:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:55.722 02:47:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:55.722 02:47:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:55.722 02:47:46 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:55.722 02:47:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:55.722 02:47:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:55.722 02:47:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:55.722 02:47:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:55.722 02:47:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:55.722 02:47:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:55.722 02:47:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:55.722 02:47:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:55.722 02:47:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:55.722 02:47:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:55.722 02:47:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:55.722 02:47:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:55.722 02:47:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:55.723 02:47:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:55.723 02:47:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:55.723 02:47:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:55.723 02:47:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:55.723 02:47:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:55.723 02:47:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:55.723 02:47:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:55.723 02:47:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:55.723 02:47:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:55.723 02:47:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:55.723 02:47:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:57.103 02:47:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:57.103 02:47:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:57.103 02:47:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:57.103 02:47:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:57.103 02:47:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:57.103 02:47:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:57.103 02:47:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:57.103 02:47:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:57.103 02:47:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:57.103 02:47:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:57.103 02:47:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:57.103 02:47:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:57.103 02:47:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:57.103 02:47:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:57.103 02:47:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:57.103 02:47:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:57.103 02:47:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:57.103 02:47:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:57.103 02:47:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:57.104 02:47:47 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:06:57.104 02:47:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:57.104 02:47:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:57.104 02:47:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:57.104 02:47:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:57.104 02:47:47 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:57.104 02:47:47 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:57.104 02:47:47 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:57.104 00:06:57.104 real 0m1.297s 00:06:57.104 user 0m1.167s 00:06:57.104 sys 0m0.133s 00:06:57.104 02:47:47 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:57.104 02:47:47 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:57.104 ************************************ 00:06:57.104 END TEST accel_dualcast 00:06:57.104 ************************************ 00:06:57.104 02:47:47 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:57.104 02:47:47 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:57.104 02:47:47 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:57.104 02:47:47 accel -- common/autotest_common.sh@10 -- # set +x 00:06:57.104 ************************************ 00:06:57.104 START TEST accel_compare 00:06:57.104 ************************************ 00:06:57.104 02:47:47 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:57.104 [2024-05-13 02:47:47.606474] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:06:57.104 [2024-05-13 02:47:47.606557] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3492570 ] 00:06:57.104 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.104 [2024-05-13 02:47:47.644418] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
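The compare pass above is launched through the accel.sh wrapper, which builds a JSON config (build_accel_config / jq in the trace) and feeds it to accel_perf as -c /dev/fd/62. The same binary can be started by hand with the flags visible in the trace; a minimal sketch, assuming the workspace path from this log and that the generated config can be dropped for a plain software run:

    # Hypothetical standalone invocation of the compare workload (flags copied from the trace above).
    SPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk      # build location assumed from the log
    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w compare -y          # -t 1: run for 1 second, -w compare: workload, -y: as passed by the harness
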
00:06:57.104 [2024-05-13 02:47:47.675119] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.104 [2024-05-13 02:47:47.712738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var 
val 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:57.104 02:47:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:58.485 02:47:48 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:58.485 02:47:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:58.485 02:47:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:58.485 02:47:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:58.485 02:47:48 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:58.485 02:47:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:58.485 02:47:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:58.485 02:47:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:58.485 02:47:48 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:58.485 02:47:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:58.485 02:47:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:58.485 02:47:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:58.485 02:47:48 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:58.485 02:47:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:58.485 02:47:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:58.485 02:47:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:58.485 02:47:48 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:58.485 02:47:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:58.485 02:47:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:58.485 02:47:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:58.485 02:47:48 accel.accel_compare -- accel/accel.sh@20 -- # val= 
00:06:58.485 02:47:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:58.485 02:47:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:58.485 02:47:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:58.485 02:47:48 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:58.485 02:47:48 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:58.486 02:47:48 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:58.486 00:06:58.486 real 0m1.288s 00:06:58.486 user 0m1.160s 00:06:58.486 sys 0m0.130s 00:06:58.486 02:47:48 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:58.486 02:47:48 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:58.486 ************************************ 00:06:58.486 END TEST accel_compare 00:06:58.486 ************************************ 00:06:58.486 02:47:48 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:58.486 02:47:48 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:58.486 02:47:48 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:58.486 02:47:48 accel -- common/autotest_common.sh@10 -- # set +x 00:06:58.486 ************************************ 00:06:58.486 START TEST accel_xor 00:06:58.486 ************************************ 00:06:58.486 02:47:48 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:06:58.486 02:47:48 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:58.486 02:47:48 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:58.486 02:47:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:58.486 02:47:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:58.486 02:47:48 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:58.486 02:47:48 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:58.486 02:47:48 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:58.486 02:47:48 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:58.486 02:47:48 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:58.486 02:47:48 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.486 02:47:48 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.486 02:47:48 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:58.486 02:47:48 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:58.486 02:47:48 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:58.486 [2024-05-13 02:47:48.979490] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:06:58.486 [2024-05-13 02:47:48.979569] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3492813 ] 00:06:58.486 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.486 [2024-05-13 02:47:49.018269] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
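Each test ends with a real/user/sys summary (0m1.288s wall clock for the compare pass above), consistent with bash's time keyword being wrapped around the test body by run_test. A rough sketch of that START TEST / time / END TEST pattern, offered as an assumption about the wrapper rather than a quote of SPDK's actual run_test:

    # Hypothetical wrapper illustrating the pattern seen in the log; not SPDK's real implementation.
    run_test_sketch() {
      local name=$1; shift
      echo "START TEST $name"
      time "$@"                  # emits the real/user/sys lines recorded after each test
      echo "END TEST $name"
    }
    # e.g. run_test_sketch accel_xor accel_test -t 1 -w xor -y
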
00:06:58.486 [2024-05-13 02:47:49.050339] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.486 [2024-05-13 02:47:49.089242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.486 02:47:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:58.486 02:47:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:58.486 02:47:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:58.486 02:47:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:58.486 02:47:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:58.486 02:47:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:58.486 02:47:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:58.486 02:47:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:58.486 02:47:49 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:58.486 02:47:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:58.486 02:47:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:58.486 02:47:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:58.486 02:47:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:58.486 02:47:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:58.486 02:47:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:58.486 02:47:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:58.486 02:47:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:58.486 02:47:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:58.486 02:47:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:58.486 02:47:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:58.486 02:47:49 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:58.486 02:47:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:58.486 02:47:49 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:58.486 02:47:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:58.486 02:47:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:58.486 02:47:49 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:58.486 02:47:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:58.486 02:47:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:58.486 02:47:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:58.486 02:47:49 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:58.486 02:47:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:58.486 02:47:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:58.486 02:47:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:58.486 02:47:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:58.486 02:47:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:58.486 02:47:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:58.486 02:47:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:58.486 02:47:49 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:58.486 02:47:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:58.486 02:47:49 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:06:58.486 02:47:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:58.486 02:47:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:58.486 02:47:49 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:58.486 02:47:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:58.486 02:47:49 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:58.486 02:47:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:58.486 02:47:49 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:58.486 02:47:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:58.486 02:47:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:58.486 02:47:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:58.486 02:47:49 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:58.486 02:47:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:58.486 02:47:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:58.486 02:47:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:58.486 02:47:49 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:58.486 02:47:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:58.486 02:47:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:58.486 02:47:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:58.486 02:47:49 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:58.486 02:47:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:58.486 02:47:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:58.486 02:47:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:58.486 02:47:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:58.486 02:47:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:58.486 02:47:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:58.486 02:47:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:58.486 02:47:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:58.486 02:47:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:58.486 02:47:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:58.486 02:47:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.868 02:47:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:59.868 02:47:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.868 02:47:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.868 02:47:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.868 02:47:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@21 -- 
# case "$var" in 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:59.869 00:06:59.869 real 0m1.294s 00:06:59.869 user 0m1.163s 00:06:59.869 sys 0m0.133s 00:06:59.869 02:47:50 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:59.869 02:47:50 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:59.869 ************************************ 00:06:59.869 END TEST accel_xor 00:06:59.869 ************************************ 00:06:59.869 02:47:50 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:59.869 02:47:50 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:59.869 02:47:50 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:59.869 02:47:50 accel -- common/autotest_common.sh@10 -- # set +x 00:06:59.869 ************************************ 00:06:59.869 START TEST accel_xor 00:06:59.869 ************************************ 00:06:59.869 02:47:50 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:59.869 [2024-05-13 02:47:50.361051] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:06:59.869 [2024-05-13 02:47:50.361134] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3493018 ] 00:06:59.869 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.869 [2024-05-13 02:47:50.400544] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:59.869 [2024-05-13 02:47:50.432404] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.869 [2024-05-13 02:47:50.472353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.869 02:47:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.870 02:47:50 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:59.870 02:47:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.870 02:47:50 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.870 02:47:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.870 02:47:50 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:59.870 02:47:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.870 02:47:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.870 02:47:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.870 02:47:50 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:59.870 02:47:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.870 02:47:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.870 02:47:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.870 02:47:50 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:59.870 02:47:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.870 02:47:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.870 02:47:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.870 02:47:50 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:59.870 02:47:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.870 02:47:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.870 02:47:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.870 02:47:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:59.870 02:47:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.870 02:47:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.870 02:47:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:59.870 02:47:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:59.870 02:47:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:59.870 02:47:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:59.870 02:47:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.252 02:47:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:01.252 02:47:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:01.252 02:47:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.252 02:47:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.252 02:47:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:01.252 02:47:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:01.252 02:47:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.252 02:47:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.252 02:47:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:01.252 02:47:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:01.252 02:47:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.252 02:47:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.252 02:47:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:01.252 02:47:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:01.252 02:47:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.252 02:47:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.252 02:47:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:01.252 02:47:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:01.252 02:47:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.252 02:47:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.252 02:47:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:01.252 02:47:51 accel.accel_xor -- accel/accel.sh@21 -- 
# case "$var" in 00:07:01.252 02:47:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.252 02:47:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.252 02:47:51 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:01.252 02:47:51 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:01.252 02:47:51 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:01.252 00:07:01.252 real 0m1.296s 00:07:01.252 user 0m1.167s 00:07:01.252 sys 0m0.134s 00:07:01.252 02:47:51 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:01.252 02:47:51 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:01.252 ************************************ 00:07:01.252 END TEST accel_xor 00:07:01.252 ************************************ 00:07:01.252 02:47:51 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:01.252 02:47:51 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:01.252 02:47:51 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:01.252 02:47:51 accel -- common/autotest_common.sh@10 -- # set +x 00:07:01.252 ************************************ 00:07:01.252 START TEST accel_dif_verify 00:07:01.252 ************************************ 00:07:01.252 02:47:51 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:07:01.252 02:47:51 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:01.252 02:47:51 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:01.252 02:47:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:01.252 02:47:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:01.252 02:47:51 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:01.252 02:47:51 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:01.252 02:47:51 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:01.252 02:47:51 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:01.252 02:47:51 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:01.252 02:47:51 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.252 02:47:51 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.252 02:47:51 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:01.252 02:47:51 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:01.252 02:47:51 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:01.252 [2024-05-13 02:47:51.734545] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:07:01.252 [2024-05-13 02:47:51.734638] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3493209 ] 00:07:01.252 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.252 [2024-05-13 02:47:51.773615] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:01.252 [2024-05-13 02:47:51.804499] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.252 [2024-05-13 02:47:51.842353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.252 02:47:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:01.252 02:47:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:01.252 02:47:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:01.252 02:47:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:01.252 02:47:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:01.252 02:47:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:01.252 02:47:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:01.252 02:47:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:01.252 02:47:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:01.252 02:47:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:01.252 02:47:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:01.252 02:47:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:01.252 02:47:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:01.252 02:47:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:01.252 02:47:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:01.252 02:47:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:01.252 02:47:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:01.252 02:47:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:01.252 02:47:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:01.252 02:47:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:01.252 02:47:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:01.252 02:47:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:01.252 02:47:51 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:01.252 02:47:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:01.252 02:47:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:01.252 02:47:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:01.252 02:47:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:01.252 02:47:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:01.252 02:47:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:01.252 02:47:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:01.252 02:47:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:01.252 02:47:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:01.252 02:47:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:01.252 02:47:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:01.252 02:47:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:01.252 02:47:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:01.252 02:47:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:01.252 02:47:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:01.252 02:47:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:01.252 02:47:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:01.252 02:47:51 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:01.252 02:47:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:01.252 02:47:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:01.252 02:47:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:01.252 02:47:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:01.252 02:47:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:01.253 02:47:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:01.253 02:47:51 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:01.253 02:47:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:01.253 02:47:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:01.253 02:47:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:01.253 02:47:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:01.253 02:47:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:01.253 02:47:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:01.253 02:47:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:01.253 02:47:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:01.253 02:47:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:01.253 02:47:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:01.253 02:47:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:01.253 02:47:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:01.253 02:47:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:01.253 02:47:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:01.253 02:47:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:01.253 02:47:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:01.253 02:47:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:01.253 02:47:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:01.253 02:47:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:01.253 02:47:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:01.253 02:47:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:01.253 02:47:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:01.253 02:47:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:01.253 02:47:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:01.253 02:47:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:01.253 02:47:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:01.253 02:47:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:01.253 02:47:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:01.253 02:47:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:01.253 02:47:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:02.634 02:47:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:02.634 02:47:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:02.634 02:47:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:02.634 02:47:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:02.634 02:47:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:02.634 02:47:53 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:02.634 02:47:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:02.634 02:47:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:02.634 02:47:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:02.634 02:47:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:02.634 02:47:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:02.634 02:47:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:02.634 02:47:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:02.634 02:47:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:02.634 02:47:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:02.634 02:47:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:02.634 02:47:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:02.634 02:47:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:02.634 02:47:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:02.634 02:47:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:02.634 02:47:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:02.634 02:47:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:02.634 02:47:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:02.634 02:47:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:02.634 02:47:53 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:02.634 02:47:53 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:02.634 02:47:53 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:02.634 00:07:02.634 real 0m1.290s 00:07:02.634 user 0m1.166s 00:07:02.634 sys 0m0.130s 00:07:02.634 02:47:53 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:02.634 02:47:53 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:02.634 ************************************ 00:07:02.634 END TEST accel_dif_verify 00:07:02.634 ************************************ 00:07:02.634 02:47:53 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:02.634 02:47:53 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:02.634 02:47:53 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:02.634 02:47:53 accel -- common/autotest_common.sh@10 -- # set +x 00:07:02.634 ************************************ 00:07:02.634 START TEST accel_dif_generate 00:07:02.634 ************************************ 00:07:02.634 02:47:53 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:07:02.634 02:47:53 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:02.634 02:47:53 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:02.634 02:47:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:02.634 02:47:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:02.634 02:47:53 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:02.634 02:47:53 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:02.634 02:47:53 accel.accel_dif_generate -- 
accel/accel.sh@12 -- # build_accel_config 00:07:02.634 02:47:53 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:02.634 02:47:53 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:02.634 02:47:53 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.634 02:47:53 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.634 02:47:53 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:02.634 02:47:53 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:02.634 02:47:53 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:02.634 [2024-05-13 02:47:53.104018] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:07:02.634 [2024-05-13 02:47:53.104097] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3493470 ] 00:07:02.634 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.634 [2024-05-13 02:47:53.142882] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:02.634 [2024-05-13 02:47:53.174813] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.634 [2024-05-13 02:47:53.212605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.634 02:47:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:02.634 02:47:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:02.634 02:47:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:02.634 02:47:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:02.634 02:47:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:02.634 02:47:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:02.634 02:47:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:02.634 02:47:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:02.634 02:47:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:02.634 02:47:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:02.634 02:47:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:02.634 02:47:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:02.634 02:47:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:02.634 02:47:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:02.634 02:47:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:02.634 02:47:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:02.634 02:47:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:02.634 02:47:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:02.634 02:47:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:02.634 02:47:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:02.634 02:47:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:02.634 02:47:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:02.634 02:47:53 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:02.634 02:47:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:02.634 02:47:53 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:02.634 02:47:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:02.634 02:47:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:02.634 02:47:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:02.634 02:47:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:02.634 02:47:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:02.635 02:47:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:02.635 02:47:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:02.635 02:47:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:02.635 02:47:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:02.635 02:47:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:02.635 02:47:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:02.635 02:47:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:02.635 02:47:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:02.635 02:47:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:02.635 02:47:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:02.635 02:47:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:02.635 02:47:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:02.635 02:47:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:02.635 02:47:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:02.635 02:47:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:02.635 02:47:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:02.635 02:47:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:02.635 02:47:53 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:02.635 02:47:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:02.635 02:47:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:02.635 02:47:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:02.635 02:47:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:02.635 02:47:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:02.635 02:47:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:02.635 02:47:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:02.635 02:47:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:02.635 02:47:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:02.635 02:47:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:02.635 02:47:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:02.635 02:47:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:02.635 02:47:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:02.635 02:47:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:02.635 02:47:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:07:02.635 02:47:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:02.635 02:47:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:02.635 02:47:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # 
read -r var val 00:07:02.635 02:47:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:02.635 02:47:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:02.635 02:47:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:02.635 02:47:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:02.635 02:47:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:02.635 02:47:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:02.635 02:47:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:02.635 02:47:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:02.635 02:47:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:02.635 02:47:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:02.635 02:47:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:02.635 02:47:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:03.571 02:47:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:03.571 02:47:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:03.571 02:47:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:03.572 02:47:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:03.572 02:47:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:03.572 02:47:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:03.572 02:47:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:03.572 02:47:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:03.572 02:47:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:03.572 02:47:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:03.572 02:47:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:03.572 02:47:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:03.572 02:47:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:03.572 02:47:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:03.572 02:47:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:03.572 02:47:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:03.572 02:47:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:03.572 02:47:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:03.572 02:47:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:03.572 02:47:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:03.572 02:47:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:03.572 02:47:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:03.572 02:47:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:03.572 02:47:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:03.572 02:47:54 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:03.572 02:47:54 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:03.572 02:47:54 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:03.572 00:07:03.572 real 0m1.292s 00:07:03.572 user 0m1.161s 00:07:03.572 sys 0m0.136s 00:07:03.572 02:47:54 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:03.831 02:47:54 
accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:03.831 ************************************ 00:07:03.831 END TEST accel_dif_generate 00:07:03.831 ************************************ 00:07:03.831 02:47:54 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:03.831 02:47:54 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:03.831 02:47:54 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:03.831 02:47:54 accel -- common/autotest_common.sh@10 -- # set +x 00:07:03.831 ************************************ 00:07:03.831 START TEST accel_dif_generate_copy 00:07:03.831 ************************************ 00:07:03.831 02:47:54 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy 00:07:03.831 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:03.831 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:03.831 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.831 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.831 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:03.831 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:03.831 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:03.831 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:03.831 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:03.831 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.831 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.831 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:03.831 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:03.831 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:03.831 [2024-05-13 02:47:54.472308] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:07:03.831 [2024-05-13 02:47:54.472408] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3493749 ] 00:07:03.831 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.831 [2024-05-13 02:47:54.510572] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
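dif_generate_copy, the last workload started in this excerpt, again changes only the workload name on the command line; its trace likewise shows 4096-byte buffers set up inside the tool. A sketch under the same assumptions as the earlier invocations:

    # Hypothetical standalone invocation of the dif_generate_copy workload (flags copied from the trace).
    SPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk      # assumed build tree
    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w dif_generate_copy
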
00:07:03.831 [2024-05-13 02:47:54.541484] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.831 [2024-05-13 02:47:54.579583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.831 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:03.831 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.831 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.831 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.831 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:03.832 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.832 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.832 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.832 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:03.832 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.832 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.832 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.832 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:03.832 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.832 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.832 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.832 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:03.832 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.832 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.832 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.832 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:03.832 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.832 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:03.832 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.832 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.832 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:03.832 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.832 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.832 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.832 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:03.832 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.832 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.832 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.832 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:03.832 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.832 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.832 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 
00:07:03.832 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:03.832 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.832 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:03.832 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.832 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.832 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:03.832 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.832 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.832 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.832 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:03.832 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.832 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.832 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.832 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:03.832 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.832 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.832 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.832 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:03.832 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.832 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.832 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.832 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:03.832 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.832 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.832 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.832 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:03.832 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.832 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.832 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.832 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:03.832 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.832 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.832 02:47:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.208 02:47:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:05.208 02:47:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.208 02:47:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.208 02:47:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.208 02:47:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:05.208 02:47:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.208 02:47:55 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:07:05.208 02:47:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.208 02:47:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:05.208 02:47:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.208 02:47:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.208 02:47:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.208 02:47:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:05.208 02:47:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.208 02:47:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.208 02:47:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.208 02:47:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:05.208 02:47:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.208 02:47:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.208 02:47:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.208 02:47:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:05.208 02:47:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.208 02:47:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.208 02:47:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.208 02:47:55 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:05.208 02:47:55 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:05.208 02:47:55 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:05.208 00:07:05.208 real 0m1.288s 00:07:05.208 user 0m1.171s 00:07:05.208 sys 0m0.122s 00:07:05.208 02:47:55 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:05.208 02:47:55 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:05.208 ************************************ 00:07:05.208 END TEST accel_dif_generate_copy 00:07:05.208 ************************************ 00:07:05.208 02:47:55 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:05.208 02:47:55 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:05.208 02:47:55 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:07:05.208 02:47:55 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:05.208 02:47:55 accel -- common/autotest_common.sh@10 -- # set +x 00:07:05.208 ************************************ 00:07:05.208 START TEST accel_comp 00:07:05.208 ************************************ 00:07:05.208 02:47:55 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:05.208 02:47:55 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:07:05.208 02:47:55 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:07:05.208 02:47:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:05.208 02:47:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:05.208 02:47:55 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:05.208 02:47:55 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:05.208 02:47:55 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:05.208 02:47:55 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:05.208 02:47:55 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:05.208 02:47:55 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.208 02:47:55 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.208 02:47:55 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:05.208 02:47:55 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:05.208 02:47:55 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:05.208 [2024-05-13 02:47:55.843033] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:07:05.208 [2024-05-13 02:47:55.843104] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3494033 ] 00:07:05.208 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.208 [2024-05-13 02:47:55.883092] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:05.208 [2024-05-13 02:47:55.912853] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.208 [2024-05-13 02:47:55.950448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.208 02:47:55 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:05.208 02:47:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.208 02:47:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:05.208 02:47:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:05.208 02:47:55 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:05.208 02:47:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.208 02:47:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:05.208 02:47:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:05.208 02:47:55 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:05.208 02:47:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.208 02:47:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:05.208 02:47:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:05.208 02:47:55 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:05.208 02:47:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.208 02:47:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:05.208 02:47:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:05.208 02:47:55 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:05.208 02:47:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.208 02:47:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:05.208 02:47:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:05.208 02:47:55 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:05.208 02:47:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.208 02:47:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:05.208 
02:47:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:05.208 02:47:55 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:05.208 02:47:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.208 02:47:55 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:05.208 02:47:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:05.208 02:47:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:05.208 02:47:55 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:05.209 02:47:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.209 02:47:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:05.209 02:47:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:05.209 02:47:55 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:05.209 02:47:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.209 02:47:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:05.209 02:47:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:05.209 02:47:55 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:05.209 02:47:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.209 02:47:55 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:05.209 02:47:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:05.209 02:47:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:05.209 02:47:55 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:05.209 02:47:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.209 02:47:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:05.209 02:47:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:05.209 02:47:55 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:05.209 02:47:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.209 02:47:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:05.209 02:47:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:05.209 02:47:55 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:05.209 02:47:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.209 02:47:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:05.209 02:47:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:05.209 02:47:55 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:05.209 02:47:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.209 02:47:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:05.209 02:47:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:05.209 02:47:55 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:05.209 02:47:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.209 02:47:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:05.209 02:47:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:05.209 02:47:55 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:05.209 02:47:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.209 02:47:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:05.209 02:47:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:05.209 02:47:55 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:05.209 02:47:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.209 02:47:55 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:05.209 02:47:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:05.209 02:47:55 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:05.209 02:47:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.209 02:47:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:05.209 02:47:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:06.582 02:47:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:06.582 02:47:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.582 02:47:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:06.582 02:47:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:06.582 02:47:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:06.582 02:47:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.582 02:47:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:06.582 02:47:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:06.582 02:47:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:06.582 02:47:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.582 02:47:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:06.582 02:47:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:06.582 02:47:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:06.582 02:47:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.582 02:47:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:06.582 02:47:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:06.582 02:47:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:06.582 02:47:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.582 02:47:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:06.582 02:47:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:06.582 02:47:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:06.582 02:47:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.582 02:47:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:06.582 02:47:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:06.582 02:47:57 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:06.582 02:47:57 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:06.582 02:47:57 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:06.582 00:07:06.582 real 0m1.293s 00:07:06.582 user 0m1.165s 00:07:06.582 sys 0m0.133s 00:07:06.582 02:47:57 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:06.582 02:47:57 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:06.582 ************************************ 00:07:06.582 END TEST accel_comp 00:07:06.582 ************************************ 00:07:06.582 02:47:57 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:07:06.582 02:47:57 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:06.582 02:47:57 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:06.582 02:47:57 accel -- common/autotest_common.sh@10 -- # set +x 00:07:06.582 ************************************ 00:07:06.582 START TEST accel_decomp 00:07:06.582 ************************************ 00:07:06.582 02:47:57 accel.accel_decomp -- common/autotest_common.sh@1121 -- # 
accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:07:06.582 02:47:57 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:06.582 02:47:57 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:06.582 02:47:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:06.582 02:47:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:06.582 02:47:57 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:07:06.582 02:47:57 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:07:06.582 02:47:57 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:06.582 02:47:57 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:06.582 02:47:57 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:06.582 02:47:57 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.582 02:47:57 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.582 02:47:57 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:06.582 02:47:57 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:06.582 02:47:57 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:06.582 [2024-05-13 02:47:57.212711] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:07:06.582 [2024-05-13 02:47:57.212790] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3494321 ] 00:07:06.582 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.582 [2024-05-13 02:47:57.251769] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:06.582 [2024-05-13 02:47:57.283274] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.582 [2024-05-13 02:47:57.321146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.582 02:47:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:06.582 02:47:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.582 02:47:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:06.582 02:47:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:06.582 02:47:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:06.582 02:47:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.582 02:47:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:06.582 02:47:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:06.582 02:47:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:06.582 02:47:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.582 02:47:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:06.582 02:47:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:06.582 02:47:57 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:06.582 02:47:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.582 02:47:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:06.582 02:47:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:06.582 02:47:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:06.582 02:47:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.582 02:47:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:06.582 02:47:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:06.582 02:47:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:06.582 02:47:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.583 02:47:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:06.583 02:47:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:06.583 02:47:57 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:06.583 02:47:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.583 02:47:57 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:06.583 02:47:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:06.583 02:47:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:06.583 02:47:57 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:06.583 02:47:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.583 02:47:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:06.583 02:47:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:06.583 02:47:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:06.583 02:47:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.583 02:47:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:06.583 02:47:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:06.583 02:47:57 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:07:06.583 02:47:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.583 02:47:57 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:06.583 02:47:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:06.583 02:47:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:06.583 02:47:57 
accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:06.583 02:47:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.583 02:47:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:06.583 02:47:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:06.583 02:47:57 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:06.583 02:47:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.583 02:47:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:06.583 02:47:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:06.583 02:47:57 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:06.583 02:47:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.583 02:47:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:06.583 02:47:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:06.583 02:47:57 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:06.583 02:47:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.583 02:47:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:06.583 02:47:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:06.583 02:47:57 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:06.583 02:47:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.583 02:47:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:06.583 02:47:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:06.583 02:47:57 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:06.583 02:47:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.583 02:47:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:06.583 02:47:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:06.583 02:47:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:06.583 02:47:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.583 02:47:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:06.583 02:47:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:06.583 02:47:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:06.583 02:47:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.583 02:47:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:06.583 02:47:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:07.958 02:47:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:07.958 02:47:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:07.958 02:47:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:07.958 02:47:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:07.958 02:47:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:07.958 02:47:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:07.958 02:47:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:07.958 02:47:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:07.958 02:47:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:07.958 02:47:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:07.958 02:47:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:07.958 02:47:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:07.958 02:47:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 
00:07:07.958 02:47:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:07.958 02:47:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:07.958 02:47:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:07.958 02:47:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:07.958 02:47:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:07.958 02:47:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:07.958 02:47:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:07.958 02:47:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:07.959 02:47:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:07.959 02:47:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:07.959 02:47:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:07.959 02:47:58 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:07.959 02:47:58 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:07.959 02:47:58 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:07.959 00:07:07.959 real 0m1.292s 00:07:07.959 user 0m1.171s 00:07:07.959 sys 0m0.123s 00:07:07.959 02:47:58 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:07.959 02:47:58 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:07.959 ************************************ 00:07:07.959 END TEST accel_decomp 00:07:07.959 ************************************ 00:07:07.959 02:47:58 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:07.959 02:47:58 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:07.959 02:47:58 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:07.959 02:47:58 accel -- common/autotest_common.sh@10 -- # set +x 00:07:07.959 ************************************ 00:07:07.959 START TEST accel_decmop_full 00:07:07.959 ************************************ 00:07:07.959 02:47:58 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:07.959 02:47:58 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:07:07.959 02:47:58 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:07:07.959 02:47:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:07.959 02:47:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:07.959 02:47:58 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:07.959 02:47:58 accel.accel_decmop_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:07.959 02:47:58 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:07:07.959 02:47:58 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:07.959 02:47:58 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:07.959 02:47:58 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.959 02:47:58 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.959 02:47:58 
accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:07.959 02:47:58 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:07:07.959 02:47:58 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:07:07.959 [2024-05-13 02:47:58.587056] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:07:07.959 [2024-05-13 02:47:58.587139] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3494600 ] 00:07:07.959 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.959 [2024-05-13 02:47:58.626749] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:07.959 [2024-05-13 02:47:58.657480] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.959 [2024-05-13 02:47:58.695521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.959 02:47:58 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:07.959 02:47:58 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:07.959 02:47:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:07.959 02:47:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:07.959 02:47:58 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:07.959 02:47:58 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:07.959 02:47:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:07.959 02:47:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:07.959 02:47:58 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:07.959 02:47:58 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:07.959 02:47:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:07.959 02:47:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:07.959 02:47:58 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:07:07.959 02:47:58 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:07.959 02:47:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:07.959 02:47:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:07.959 02:47:58 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:07.959 02:47:58 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:07.959 02:47:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:07.959 02:47:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:07.959 02:47:58 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:07.959 02:47:58 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:07.959 02:47:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:07.959 02:47:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:07.959 02:47:58 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:07:07.959 02:47:58 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:07.959 02:47:58 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:07.959 02:47:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:07.959 02:47:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:07.959 02:47:58 accel.accel_decmop_full -- 
accel/accel.sh@20 -- # val='111250 bytes' 00:07:07.959 02:47:58 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:07.959 02:47:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:07.959 02:47:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:07.959 02:47:58 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:07.959 02:47:58 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:07.959 02:47:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:07.959 02:47:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:07.959 02:47:58 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:07:07.959 02:47:58 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:07.959 02:47:58 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:07:07.959 02:47:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:07.959 02:47:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:07.959 02:47:58 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:07.960 02:47:58 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:07.960 02:47:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:07.960 02:47:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:07.960 02:47:58 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:07:07.960 02:47:58 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:07.960 02:47:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:07.960 02:47:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:07.960 02:47:58 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:07:07.960 02:47:58 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:07.960 02:47:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:07.960 02:47:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:07.960 02:47:58 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:07:07.960 02:47:58 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:07.960 02:47:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:07.960 02:47:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:07.960 02:47:58 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:07.960 02:47:58 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:07.960 02:47:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:07.960 02:47:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:07.960 02:47:58 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:07:07.960 02:47:58 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:07.960 02:47:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:07.960 02:47:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:07.960 02:47:58 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:07.960 02:47:58 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:07.960 02:47:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:07.960 02:47:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:07.960 02:47:58 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:07.960 02:47:58 
accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:07.960 02:47:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:07.960 02:47:58 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:09.335 02:47:59 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:09.335 02:47:59 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:09.335 02:47:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:09.335 02:47:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:09.335 02:47:59 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:09.335 02:47:59 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:09.335 02:47:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:09.335 02:47:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:09.335 02:47:59 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:09.335 02:47:59 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:09.335 02:47:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:09.335 02:47:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:09.335 02:47:59 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:09.335 02:47:59 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:09.335 02:47:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:09.335 02:47:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:09.335 02:47:59 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:09.335 02:47:59 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:09.336 02:47:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:09.336 02:47:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:09.336 02:47:59 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:09.336 02:47:59 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:09.336 02:47:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:09.336 02:47:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:09.336 02:47:59 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:09.336 02:47:59 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:09.336 02:47:59 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.336 00:07:09.336 real 0m1.302s 00:07:09.336 user 0m1.173s 00:07:09.336 sys 0m0.130s 00:07:09.336 02:47:59 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:09.336 02:47:59 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:07:09.336 ************************************ 00:07:09.336 END TEST accel_decmop_full 00:07:09.336 ************************************ 00:07:09.336 02:47:59 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:09.336 02:47:59 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:09.336 02:47:59 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:09.336 02:47:59 accel -- common/autotest_common.sh@10 -- # set +x 00:07:09.336 ************************************ 00:07:09.336 START TEST accel_decomp_mcore 00:07:09.336 ************************************ 00:07:09.336 02:47:59 accel.accel_decomp_mcore -- 
common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:09.336 02:47:59 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:09.336 02:47:59 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:09.336 02:47:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.336 02:47:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.336 02:47:59 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:09.336 02:47:59 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:09.336 02:47:59 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:09.336 02:47:59 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:09.336 02:47:59 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:09.336 02:47:59 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.336 02:47:59 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.336 02:47:59 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:09.336 02:47:59 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:09.336 02:47:59 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:09.336 [2024-05-13 02:47:59.953652] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:07:09.336 [2024-05-13 02:47:59.953731] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3494882 ] 00:07:09.336 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.336 [2024-05-13 02:47:59.993485] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:09.336 [2024-05-13 02:48:00.028048] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:09.336 [2024-05-13 02:48:00.072975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.336 [2024-05-13 02:48:00.073071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:09.336 [2024-05-13 02:48:00.073162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:09.336 [2024-05-13 02:48:00.073164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@19 
-- # IFS=: 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.336 02:48:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:10.714 02:48:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:10.714 02:48:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:10.714 02:48:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:10.714 
02:48:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:10.714 02:48:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:10.714 02:48:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:10.714 02:48:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:10.714 02:48:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:10.714 02:48:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:10.714 02:48:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:10.714 02:48:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:10.714 02:48:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:10.714 02:48:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:10.714 02:48:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:10.714 02:48:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:10.714 02:48:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:10.714 02:48:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:10.714 02:48:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:10.714 02:48:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:10.714 02:48:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:10.714 02:48:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:10.714 02:48:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:10.714 02:48:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:10.714 02:48:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:10.714 02:48:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:10.714 02:48:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:10.714 02:48:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:10.714 02:48:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:10.714 02:48:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:10.714 02:48:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:10.714 02:48:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:10.714 02:48:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:10.714 02:48:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:10.714 02:48:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:10.714 02:48:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:10.714 02:48:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:10.714 02:48:01 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:10.714 02:48:01 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:10.714 02:48:01 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:10.714 00:07:10.714 real 0m1.322s 00:07:10.714 user 0m4.526s 00:07:10.714 sys 0m0.142s 00:07:10.714 02:48:01 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:10.714 02:48:01 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:10.714 ************************************ 00:07:10.714 END TEST accel_decomp_mcore 00:07:10.714 ************************************ 00:07:10.714 02:48:01 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore 
accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:10.714 02:48:01 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:10.714 02:48:01 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:10.714 02:48:01 accel -- common/autotest_common.sh@10 -- # set +x 00:07:10.714 ************************************ 00:07:10.714 START TEST accel_decomp_full_mcore 00:07:10.714 ************************************ 00:07:10.714 02:48:01 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:10.714 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:10.714 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:10.714 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:10.714 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:10.714 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:10.714 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:10.714 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:10.714 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:10.715 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:10.715 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.715 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.715 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:10.715 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:10.715 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:10.715 [2024-05-13 02:48:01.367639] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:07:10.715 [2024-05-13 02:48:01.367707] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3495243 ] 00:07:10.715 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.715 [2024-05-13 02:48:01.407649] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
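Note: accel_perf is launched above with the core mask -m 0xf, which the EAL parameters line echoes as -c 0xf; the reactor messages that follow report one reactor per selected core. A minimal sketch of how such a mask expands into core IDs (mask_to_cores is a hypothetical helper, not part of accel.sh):

    # Expand a hex core mask (e.g. 0xf) into the core IDs it selects.
    mask_to_cores() {
        local mask=$(( $1 ))          # 0x-prefixed input is evaluated as a number
        local core=0 cores=()
        while (( mask )); do
            (( mask & 1 )) && cores+=("$core")
            mask=$(( mask >> 1 )); core=$(( core + 1 ))
        done
        echo "${cores[@]}"
    }
    mask_to_cores 0xf                 # prints: 0 1 2 3, matching the four reactors below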
00:07:10.715 [2024-05-13 02:48:01.437382] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:10.715 [2024-05-13 02:48:01.478973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:10.715 [2024-05-13 02:48:01.479069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:10.715 [2024-05-13 02:48:01.479155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:10.715 [2024-05-13 02:48:01.479157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.975 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:10.975 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:10.975 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:10.975 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:10.975 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:10.975 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:10.975 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:10.975 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:10.975 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:10.975 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:10.975 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:10.975 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:10.975 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:10.975 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:10.975 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:10.975 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:10.975 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:10.975 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:10.975 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:10.975 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:10.975 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:10.975 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:10.975 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:10.975 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:10.975 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:10.975 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:10.975 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:10.975 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:10.975 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:10.975 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:10.975 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:10.975 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:10.975 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:10.975 02:48:01 accel.accel_decomp_full_mcore -- 
accel/accel.sh@20 -- # val= 00:07:10.975 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:10.976 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:10.976 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:10.976 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:10.976 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:10.976 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:10.976 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:10.976 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:10.976 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:10.976 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:10.976 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:10.976 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:10.976 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:10.976 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:10.976 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:10.976 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:10.976 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:10.976 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:10.976 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:10.976 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:10.976 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:10.976 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:10.976 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:10.976 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:10.976 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:10.976 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:10.976 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:10.976 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:10.976 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:10.976 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:10.976 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:10.976 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:10.976 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:10.976 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:10.976 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:10.976 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:10.976 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:10.976 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:10.976 02:48:01 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:10.976 02:48:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.913 02:48:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:11.913 02:48:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.913 02:48:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.913 02:48:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.913 02:48:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:11.913 02:48:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.913 02:48:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.913 02:48:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.913 02:48:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:11.913 02:48:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.913 02:48:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.913 02:48:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.913 02:48:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:11.913 02:48:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.913 02:48:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.913 02:48:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.913 02:48:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:11.913 02:48:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.913 02:48:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.913 02:48:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.913 02:48:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:11.913 02:48:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.913 02:48:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.913 02:48:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.913 02:48:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:11.913 02:48:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.913 02:48:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.913 02:48:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.913 02:48:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:11.913 02:48:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.913 02:48:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.913 02:48:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.913 02:48:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:11.914 02:48:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.914 02:48:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.914 02:48:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.914 02:48:02 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:11.914 02:48:02 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:11.914 02:48:02 accel.accel_decomp_full_mcore 
-- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:11.914 00:07:11.914 real 0m1.323s 00:07:11.914 user 0m4.546s 00:07:11.914 sys 0m0.146s 00:07:11.914 02:48:02 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:11.914 02:48:02 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:11.914 ************************************ 00:07:11.914 END TEST accel_decomp_full_mcore 00:07:11.914 ************************************ 00:07:11.914 02:48:02 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:11.914 02:48:02 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:11.914 02:48:02 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:11.914 02:48:02 accel -- common/autotest_common.sh@10 -- # set +x 00:07:12.173 ************************************ 00:07:12.173 START TEST accel_decomp_mthread 00:07:12.173 ************************************ 00:07:12.173 02:48:02 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:12.173 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:12.173 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:12.173 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.173 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.173 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:12.173 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:12.173 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:12.173 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:12.173 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:12.173 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.173 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.173 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:12.173 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:12.173 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:12.173 [2024-05-13 02:48:02.784515] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:07:12.173 [2024-05-13 02:48:02.784593] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3495487 ] 00:07:12.173 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.173 [2024-05-13 02:48:02.823632] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
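Note: the END/START banners above are printed by the harness's run_test wrapper, which brackets each test command and reports its wall time via the real/user/sys lines. A simplified stand-in (a sketch, not the actual autotest_common.sh code) showing the observable behaviour:

    # Simplified run_test: banner, run the command, banner, keep the exit code.
    run_test_sketch() {
        local name=$1; shift
        echo "************ START TEST $name ************"
        local start=$SECONDS rc=0
        "$@" || rc=$?
        echo "************ END TEST $name ************"
        echo "elapsed: $(( SECONDS - start ))s, exit code: $rc"
        return $rc
    }
    # Example mirroring the accel_decomp_mthread invocation above; SPDK_DIR and
    # accel_test are assumed to be provided by the harness environment.
    run_test_sketch accel_decomp_mthread \
        accel_test -t 1 -w decompress -l "$SPDK_DIR/test/accel/bib" -y -T 2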
00:07:12.173 [2024-05-13 02:48:02.854488] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.173 [2024-05-13 02:48:02.892405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.173 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:12.173 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.173 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.173 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.173 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:12.173 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.173 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.173 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.173 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:12.173 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.173 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.173 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.173 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:12.173 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.173 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.173 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.173 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:12.173 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.173 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.173 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.173 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:12.173 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.173 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.173 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.173 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:12.173 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.173 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:12.173 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.173 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.173 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:12.173 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.173 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.173 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.173 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:12.173 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.173 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.173 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.173 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:12.173 02:48:02 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:07:12.173 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:12.173 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.173 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.173 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:12.173 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.173 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.174 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.174 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:12.174 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.174 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.174 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.174 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:12.174 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.174 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.174 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.174 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:12.174 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.174 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.174 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.174 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:12.174 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.174 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.174 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.174 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:12.174 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.174 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.174 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.174 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:12.174 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.174 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.174 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.174 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:12.174 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.174 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.174 02:48:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:13.550 02:48:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:13.550 02:48:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:13.550 02:48:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:13.550 02:48:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:13.550 02:48:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 
00:07:13.550 02:48:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:13.550 02:48:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:13.550 02:48:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:13.550 02:48:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:13.550 02:48:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:13.550 02:48:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:13.550 02:48:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:13.550 02:48:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:13.550 02:48:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:13.550 02:48:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:13.550 02:48:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:13.550 02:48:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:13.550 02:48:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:13.550 02:48:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:13.550 02:48:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:13.550 02:48:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:13.550 02:48:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:13.550 02:48:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:13.550 02:48:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:13.550 02:48:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:13.550 02:48:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:13.550 02:48:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:13.550 02:48:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:13.550 02:48:04 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:13.550 02:48:04 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:13.550 02:48:04 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:13.550 00:07:13.550 real 0m1.303s 00:07:13.550 user 0m1.178s 00:07:13.550 sys 0m0.141s 00:07:13.550 02:48:04 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:13.550 02:48:04 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:13.550 ************************************ 00:07:13.550 END TEST accel_decomp_mthread 00:07:13.550 ************************************ 00:07:13.550 02:48:04 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:13.550 02:48:04 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:13.550 02:48:04 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:13.550 02:48:04 accel -- common/autotest_common.sh@10 -- # set +x 00:07:13.550 ************************************ 00:07:13.550 START TEST accel_decomp_full_mthread 00:07:13.550 ************************************ 00:07:13.550 02:48:04 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:13.550 02:48:04 accel.accel_decomp_full_mthread 
-- accel/accel.sh@16 -- # local accel_opc 00:07:13.550 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:13.550 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:13.550 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:13.550 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:13.550 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:13.550 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:13.550 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:13.550 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:13.550 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.550 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.550 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:13.550 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:13.550 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:13.550 [2024-05-13 02:48:04.167861] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:07:13.550 [2024-05-13 02:48:04.167913] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3495835 ] 00:07:13.550 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.550 [2024-05-13 02:48:04.203481] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
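Note: as in the earlier decompress runs, accel_perf reads its accel configuration from -c /dev/fd/62; the harness builds the JSON config in memory (build_accel_config), pipes it through jq -r ., and hands it to the child on a file descriptor instead of a temp file. A sketch of that technique with a placeholder config (the JSON below is illustrative, not what build_accel_config produced here):

    # Feed an in-memory JSON config to a child process on fd 62 (no temp file).
    accel_json_cfg='{"subsystems": []}'                     # placeholder config
    exec 62< <(printf '%s\n' "$accel_json_cfg" | jq -r .)   # validate JSON, expose it on fd 62
    "$SPDK_DIR/build/examples/accel_perf" -c /dev/fd/62 \
        -t 1 -w decompress -l "$SPDK_DIR/test/accel/bib" -y -o 0 -T 2
    exec 62<&-                                              # close the descriptor when done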
00:07:13.550 [2024-05-13 02:48:04.235458] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.550 [2024-05-13 02:48:04.274548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.550 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:13.550 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:13.550 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:13.550 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:13.550 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:13.550 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:13.550 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:13.550 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:13.550 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:13.550 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:13.550 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:13.550 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:13.550 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:13.550 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:13.550 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:13.550 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:13.550 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:13.550 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:13.550 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:13.550 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:13.550 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:13.550 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:13.550 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:13.550 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:13.550 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:13.550 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:13.550 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:13.551 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:13.551 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:13.551 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:13.551 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:13.551 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:13.551 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:13.551 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:13.551 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:13.551 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:13.551 02:48:04 accel.accel_decomp_full_mthread 
-- accel/accel.sh@19 -- # read -r var val 00:07:13.551 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:13.551 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:13.551 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:13.551 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:13.551 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:13.551 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:13.551 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:13.551 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:13.551 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:13.551 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:13.551 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:13.551 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:13.551 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:13.551 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:13.551 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:13.551 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:13.551 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:13.551 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:13.551 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:13.551 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:13.551 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:13.551 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:13.551 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:13.551 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:13.551 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:13.551 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:13.551 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:13.551 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:13.551 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:13.551 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:13.551 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:13.551 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:13.551 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:13.551 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:13.551 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:13.551 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:13.551 02:48:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.929 02:48:05 accel.accel_decomp_full_mthread -- 
accel/accel.sh@20 -- # val= 00:07:14.929 02:48:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.929 02:48:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.929 02:48:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.929 02:48:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:14.929 02:48:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.929 02:48:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.929 02:48:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.929 02:48:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:14.929 02:48:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.929 02:48:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.929 02:48:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.929 02:48:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:14.929 02:48:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.929 02:48:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.929 02:48:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.929 02:48:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:14.929 02:48:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.929 02:48:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.929 02:48:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.929 02:48:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:14.929 02:48:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.929 02:48:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.929 02:48:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.929 02:48:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:14.929 02:48:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.929 02:48:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.929 02:48:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.929 02:48:05 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:14.929 02:48:05 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:14.929 02:48:05 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:14.929 00:07:14.929 real 0m1.309s 00:07:14.929 user 0m1.197s 00:07:14.929 sys 0m0.126s 00:07:14.929 02:48:05 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:14.929 02:48:05 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:14.929 ************************************ 00:07:14.929 END TEST accel_decomp_full_mthread 00:07:14.929 ************************************ 00:07:14.929 02:48:05 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:14.929 02:48:05 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:14.929 02:48:05 accel -- accel/accel.sh@137 -- # build_accel_config 
00:07:14.929 02:48:05 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:14.929 02:48:05 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:14.929 02:48:05 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:14.930 02:48:05 accel -- common/autotest_common.sh@10 -- # set +x 00:07:14.930 02:48:05 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:14.930 02:48:05 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.930 02:48:05 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.930 02:48:05 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:14.930 02:48:05 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:14.930 02:48:05 accel -- accel/accel.sh@41 -- # jq -r . 00:07:14.930 ************************************ 00:07:14.930 START TEST accel_dif_functional_tests 00:07:14.930 ************************************ 00:07:14.930 02:48:05 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:14.930 [2024-05-13 02:48:05.581461] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:07:14.930 [2024-05-13 02:48:05.581545] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3496333 ] 00:07:14.930 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.930 [2024-05-13 02:48:05.619111] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:14.930 [2024-05-13 02:48:05.650942] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:14.930 [2024-05-13 02:48:05.692297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.930 [2024-05-13 02:48:05.692399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.930 [2024-05-13 02:48:05.692400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:15.189 00:07:15.189 00:07:15.189 CUnit - A unit testing framework for C - Version 2.1-3 00:07:15.189 http://cunit.sourceforge.net/ 00:07:15.189 00:07:15.189 00:07:15.189 Suite: accel_dif 00:07:15.189 Test: verify: DIF generated, GUARD check ...passed 00:07:15.189 Test: verify: DIF generated, APPTAG check ...passed 00:07:15.189 Test: verify: DIF generated, REFTAG check ...passed 00:07:15.189 Test: verify: DIF not generated, GUARD check ...[2024-05-13 02:48:05.753561] dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:15.189 [2024-05-13 02:48:05.753603] dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:15.189 passed 00:07:15.189 Test: verify: DIF not generated, APPTAG check ...[2024-05-13 02:48:05.753636] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:15.189 [2024-05-13 02:48:05.753655] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:15.189 passed 00:07:15.189 Test: verify: DIF not generated, REFTAG check ...[2024-05-13 02:48:05.753676] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:15.189 [2024-05-13 02:48:05.753696] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:15.189 passed 00:07:15.189 Test: verify: 
APPTAG correct, APPTAG check ...passed 00:07:15.189 Test: verify: APPTAG incorrect, APPTAG check ...[2024-05-13 02:48:05.753741] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:15.189 passed 00:07:15.189 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:15.189 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:15.189 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:15.189 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-05-13 02:48:05.753841] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:15.189 passed 00:07:15.189 Test: generate copy: DIF generated, GUARD check ...passed 00:07:15.189 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:15.189 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:15.189 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:15.189 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:15.189 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:15.189 Test: generate copy: iovecs-len validate ...[2024-05-13 02:48:05.754017] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:07:15.189 passed 00:07:15.189 Test: generate copy: buffer alignment validate ...passed 00:07:15.189 00:07:15.189 Run Summary: Type Total Ran Passed Failed Inactive 00:07:15.189 suites 1 1 n/a 0 0 00:07:15.189 tests 20 20 20 0 0 00:07:15.189 asserts 204 204 204 0 n/a 00:07:15.189 00:07:15.189 Elapsed time = 0.000 seconds 00:07:15.189 00:07:15.189 real 0m0.346s 00:07:15.189 user 0m0.489s 00:07:15.189 sys 0m0.151s 00:07:15.189 02:48:05 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:15.189 02:48:05 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:15.189 ************************************ 00:07:15.189 END TEST accel_dif_functional_tests 00:07:15.189 ************************************ 00:07:15.189 00:07:15.189 real 0m29.842s 00:07:15.189 user 0m32.561s 00:07:15.189 sys 0m4.911s 00:07:15.189 02:48:05 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:15.189 02:48:05 accel -- common/autotest_common.sh@10 -- # set +x 00:07:15.189 ************************************ 00:07:15.189 END TEST accel 00:07:15.189 ************************************ 00:07:15.189 02:48:05 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:15.189 02:48:05 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:15.189 02:48:05 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:15.449 02:48:05 -- common/autotest_common.sh@10 -- # set +x 00:07:15.449 ************************************ 00:07:15.449 START TEST accel_rpc 00:07:15.449 ************************************ 00:07:15.449 02:48:06 accel_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:15.449 * Looking for test storage... 
00:07:15.449 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel 00:07:15.449 02:48:06 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:15.449 02:48:06 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3496650 00:07:15.449 02:48:06 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:15.449 02:48:06 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 3496650 00:07:15.449 02:48:06 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 3496650 ']' 00:07:15.449 02:48:06 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.449 02:48:06 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:15.449 02:48:06 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.449 02:48:06 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:15.449 02:48:06 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.449 [2024-05-13 02:48:06.160664] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:07:15.449 [2024-05-13 02:48:06.160746] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3496650 ] 00:07:15.449 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.449 [2024-05-13 02:48:06.200859] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
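Note: spdk_tgt is started here with --wait-for-rpc, so the accel_assign_opcode test that follows drives it entirely over the RPC socket that waitforlisten polls (/var/tmp/spdk.sock). The RPC sequence it exercises, written out as a sketch (rpc.py path relative to the SPDK checkout):

    # Assign the 'copy' opcode, finish framework init, confirm the assignment.
    ./scripts/rpc.py accel_assign_opc -o copy -m incorrect    # overridden by the next call
    ./scripts/rpc.py accel_assign_opc -o copy -m software
    ./scripts/rpc.py framework_start_init                     # leaves the --wait-for-rpc state
    ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy  # expected output: software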
00:07:15.449 [2024-05-13 02:48:06.229564] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.709 [2024-05-13 02:48:06.268184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.709 02:48:06 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:15.709 02:48:06 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:15.709 02:48:06 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:15.709 02:48:06 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:15.709 02:48:06 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:15.710 02:48:06 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:15.710 02:48:06 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:15.710 02:48:06 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:15.710 02:48:06 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:15.710 02:48:06 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.710 ************************************ 00:07:15.710 START TEST accel_assign_opcode 00:07:15.710 ************************************ 00:07:15.710 02:48:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:07:15.710 02:48:06 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:15.710 02:48:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.710 02:48:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:15.710 [2024-05-13 02:48:06.364792] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:15.710 02:48:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.710 02:48:06 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:15.710 02:48:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.710 02:48:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:15.710 [2024-05-13 02:48:06.372802] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:15.710 02:48:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.710 02:48:06 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:15.710 02:48:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.710 02:48:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:15.969 02:48:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.970 02:48:06 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:15.970 02:48:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.970 02:48:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:15.970 02:48:06 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:15.970 02:48:06 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:15.970 02:48:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.970 software 00:07:15.970 00:07:15.970 real 0m0.222s 
00:07:15.970 user 0m0.040s 00:07:15.970 sys 0m0.012s 00:07:15.970 02:48:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:15.970 02:48:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:15.970 ************************************ 00:07:15.970 END TEST accel_assign_opcode 00:07:15.970 ************************************ 00:07:15.970 02:48:06 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 3496650 00:07:15.970 02:48:06 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 3496650 ']' 00:07:15.970 02:48:06 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 3496650 00:07:15.970 02:48:06 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:07:15.970 02:48:06 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:15.970 02:48:06 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3496650 00:07:15.970 02:48:06 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:15.970 02:48:06 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:15.970 02:48:06 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3496650' 00:07:15.970 killing process with pid 3496650 00:07:15.970 02:48:06 accel_rpc -- common/autotest_common.sh@965 -- # kill 3496650 00:07:15.970 02:48:06 accel_rpc -- common/autotest_common.sh@970 -- # wait 3496650 00:07:16.229 00:07:16.229 real 0m0.929s 00:07:16.229 user 0m0.821s 00:07:16.229 sys 0m0.463s 00:07:16.229 02:48:06 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:16.229 02:48:06 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.229 ************************************ 00:07:16.229 END TEST accel_rpc 00:07:16.229 ************************************ 00:07:16.229 02:48:07 -- spdk/autotest.sh@181 -- # run_test app_cmdline /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:07:16.229 02:48:07 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:16.229 02:48:07 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:16.229 02:48:07 -- common/autotest_common.sh@10 -- # set +x 00:07:16.489 ************************************ 00:07:16.489 START TEST app_cmdline 00:07:16.490 ************************************ 00:07:16.490 02:48:07 app_cmdline -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:07:16.490 * Looking for test storage... 00:07:16.490 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:07:16.490 02:48:07 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:16.490 02:48:07 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3496798 00:07:16.490 02:48:07 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3496798 00:07:16.490 02:48:07 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:16.490 02:48:07 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 3496798 ']' 00:07:16.490 02:48:07 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.490 02:48:07 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:16.490 02:48:07 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:16.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.490 02:48:07 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:16.490 02:48:07 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:16.490 [2024-05-13 02:48:07.180668] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:07:16.490 [2024-05-13 02:48:07.180751] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3496798 ] 00:07:16.490 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.490 [2024-05-13 02:48:07.216899] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:16.490 [2024-05-13 02:48:07.249709] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.490 [2024-05-13 02:48:07.288612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.749 02:48:07 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:16.749 02:48:07 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:07:16.749 02:48:07 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:17.009 { 00:07:17.009 "version": "SPDK v24.05-pre git sha1 dafdb289f", 00:07:17.009 "fields": { 00:07:17.009 "major": 24, 00:07:17.009 "minor": 5, 00:07:17.009 "patch": 0, 00:07:17.009 "suffix": "-pre", 00:07:17.009 "commit": "dafdb289f" 00:07:17.009 } 00:07:17.009 } 00:07:17.009 02:48:07 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:17.009 02:48:07 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:17.009 02:48:07 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:17.009 02:48:07 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:17.009 02:48:07 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:17.009 02:48:07 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:17.009 02:48:07 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.009 02:48:07 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:17.009 02:48:07 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:17.009 02:48:07 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.009 02:48:07 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:17.009 02:48:07 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:17.009 02:48:07 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:17.009 02:48:07 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:17.009 02:48:07 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:17.009 02:48:07 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:07:17.009 02:48:07 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:17.009 02:48:07 app_cmdline -- common/autotest_common.sh@640 -- # type 
-t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:07:17.009 02:48:07 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:17.009 02:48:07 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:07:17.009 02:48:07 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:17.009 02:48:07 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:07:17.009 02:48:07 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py ]] 00:07:17.009 02:48:07 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:17.270 request: 00:07:17.270 { 00:07:17.270 "method": "env_dpdk_get_mem_stats", 00:07:17.270 "req_id": 1 00:07:17.270 } 00:07:17.270 Got JSON-RPC error response 00:07:17.270 response: 00:07:17.270 { 00:07:17.270 "code": -32601, 00:07:17.270 "message": "Method not found" 00:07:17.270 } 00:07:17.270 02:48:07 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:17.270 02:48:07 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:17.270 02:48:07 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:17.270 02:48:07 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:17.270 02:48:07 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3496798 00:07:17.270 02:48:07 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 3496798 ']' 00:07:17.270 02:48:07 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 3496798 00:07:17.270 02:48:07 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:07:17.270 02:48:07 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:17.270 02:48:07 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3496798 00:07:17.270 02:48:07 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:17.270 02:48:07 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:17.270 02:48:07 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3496798' 00:07:17.270 killing process with pid 3496798 00:07:17.270 02:48:07 app_cmdline -- common/autotest_common.sh@965 -- # kill 3496798 00:07:17.270 02:48:07 app_cmdline -- common/autotest_common.sh@970 -- # wait 3496798 00:07:17.530 00:07:17.530 real 0m1.142s 00:07:17.530 user 0m1.285s 00:07:17.530 sys 0m0.448s 00:07:17.530 02:48:08 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:17.530 02:48:08 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:17.530 ************************************ 00:07:17.530 END TEST app_cmdline 00:07:17.530 ************************************ 00:07:17.530 02:48:08 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:07:17.530 02:48:08 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:17.530 02:48:08 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:17.530 02:48:08 -- common/autotest_common.sh@10 -- # set +x 00:07:17.530 ************************************ 00:07:17.530 START TEST version 00:07:17.530 ************************************ 00:07:17.530 02:48:08 version -- common/autotest_common.sh@1121 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:07:17.790 * Looking for test storage... 00:07:17.790 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:07:17.790 02:48:08 version -- app/version.sh@17 -- # get_header_version major 00:07:17.790 02:48:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:07:17.790 02:48:08 version -- app/version.sh@14 -- # cut -f2 00:07:17.790 02:48:08 version -- app/version.sh@14 -- # tr -d '"' 00:07:17.790 02:48:08 version -- app/version.sh@17 -- # major=24 00:07:17.790 02:48:08 version -- app/version.sh@18 -- # get_header_version minor 00:07:17.790 02:48:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:07:17.790 02:48:08 version -- app/version.sh@14 -- # cut -f2 00:07:17.790 02:48:08 version -- app/version.sh@14 -- # tr -d '"' 00:07:17.790 02:48:08 version -- app/version.sh@18 -- # minor=5 00:07:17.790 02:48:08 version -- app/version.sh@19 -- # get_header_version patch 00:07:17.790 02:48:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:07:17.790 02:48:08 version -- app/version.sh@14 -- # cut -f2 00:07:17.790 02:48:08 version -- app/version.sh@14 -- # tr -d '"' 00:07:17.790 02:48:08 version -- app/version.sh@19 -- # patch=0 00:07:17.790 02:48:08 version -- app/version.sh@20 -- # get_header_version suffix 00:07:17.790 02:48:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:07:17.790 02:48:08 version -- app/version.sh@14 -- # tr -d '"' 00:07:17.790 02:48:08 version -- app/version.sh@14 -- # cut -f2 00:07:17.790 02:48:08 version -- app/version.sh@20 -- # suffix=-pre 00:07:17.790 02:48:08 version -- app/version.sh@22 -- # version=24.5 00:07:17.790 02:48:08 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:17.790 02:48:08 version -- app/version.sh@28 -- # version=24.5rc0 00:07:17.790 02:48:08 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:17.790 02:48:08 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:17.790 02:48:08 version -- app/version.sh@30 -- # py_version=24.5rc0 00:07:17.790 02:48:08 version -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:07:17.790 00:07:17.790 real 0m0.174s 00:07:17.790 user 0m0.082s 00:07:17.790 sys 0m0.137s 00:07:17.790 02:48:08 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:17.790 02:48:08 version -- common/autotest_common.sh@10 -- # set +x 00:07:17.790 ************************************ 00:07:17.790 END TEST version 00:07:17.790 ************************************ 00:07:17.790 02:48:08 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:07:17.790 02:48:08 -- spdk/autotest.sh@194 -- # uname -s 00:07:17.790 02:48:08 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:17.790 02:48:08 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 
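The app_cmdline run above drives the target purely through JSON-RPC: it asks for spdk_get_version, checks that rpc_get_methods reports exactly the two expected methods, and then confirms that an unregistered method fails cleanly. A minimal sketch of the same three calls, assuming an spdk_tgt is already listening on the default /var/tmp/spdk.sock and the commands run from the root of an SPDK checkout:

    # Query the running SPDK target for its version over JSON-RPC.
    ./scripts/rpc.py spdk_get_version
    # List the methods the target currently exposes, sorted as in the run above.
    ./scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort
    # An RPC the target has not registered (env_dpdk_get_mem_stats in this run)
    # comes back as a JSON-RPC error, code -32601 "Method not found".
    ./scripts/rpc.py env_dpdk_get_mem_stats || echo 'method not available on this target'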
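The version test above reads the major, minor, patch and suffix fields straight out of include/spdk/version.h with the grep/cut/tr pipeline shown in the trace, assembles a version string, and compares it with what the bundled Python package reports. A rough stand-alone equivalent, assuming it runs from the root of an SPDK checkout (the rc0 mapping reflects this particular run, not every branch):

    # Mirror get_header_version: pull each field out of include/spdk/version.h.
    major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"')
    minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"')
    patch=$(grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"')
    suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"')
    version=$major.$minor
    (( patch != 0 )) && version=$version.$patch
    # In this run the suffix is "-pre", which the test maps to an rc0 tag (24.5 -> 24.5rc0).
    [[ $suffix == -pre ]] && version=${version}rc0
    echo "header says $version"
    # Cross-check against the bundled Python package, as version.sh does.
    PYTHONPATH=./python python3 -c 'import spdk; print(spdk.__version__)'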
00:07:17.790 02:48:08 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:17.790 02:48:08 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:17.790 02:48:08 -- spdk/autotest.sh@254 -- # '[' 0 -eq 1 ']' 00:07:17.790 02:48:08 -- spdk/autotest.sh@258 -- # timing_exit lib 00:07:17.790 02:48:08 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:17.790 02:48:08 -- common/autotest_common.sh@10 -- # set +x 00:07:17.790 02:48:08 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:07:17.790 02:48:08 -- spdk/autotest.sh@268 -- # '[' 0 -eq 1 ']' 00:07:17.790 02:48:08 -- spdk/autotest.sh@277 -- # '[' 0 -eq 1 ']' 00:07:17.790 02:48:08 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:07:17.790 02:48:08 -- spdk/autotest.sh@310 -- # '[' 0 -eq 1 ']' 00:07:17.790 02:48:08 -- spdk/autotest.sh@314 -- # '[' 0 -eq 1 ']' 00:07:17.790 02:48:08 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:07:17.790 02:48:08 -- spdk/autotest.sh@328 -- # '[' 0 -eq 1 ']' 00:07:17.790 02:48:08 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:07:17.790 02:48:08 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:07:17.790 02:48:08 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:07:17.790 02:48:08 -- spdk/autotest.sh@345 -- # '[' 0 -eq 1 ']' 00:07:17.790 02:48:08 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:07:17.790 02:48:08 -- spdk/autotest.sh@354 -- # '[' 0 -eq 1 ']' 00:07:17.790 02:48:08 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:07:17.790 02:48:08 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:07:17.790 02:48:08 -- spdk/autotest.sh@369 -- # [[ 1 -eq 1 ]] 00:07:17.790 02:48:08 -- spdk/autotest.sh@370 -- # run_test llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:07:17.790 02:48:08 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:17.790 02:48:08 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:17.790 02:48:08 -- common/autotest_common.sh@10 -- # set +x 00:07:18.084 ************************************ 00:07:18.084 START TEST llvm_fuzz 00:07:18.084 ************************************ 00:07:18.085 02:48:08 llvm_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:07:18.085 * Looking for test storage... 
00:07:18.085 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz 00:07:18.085 02:48:08 llvm_fuzz -- fuzz/llvm.sh@11 -- # fuzzers=($(get_fuzzer_targets)) 00:07:18.085 02:48:08 llvm_fuzz -- fuzz/llvm.sh@11 -- # get_fuzzer_targets 00:07:18.085 02:48:08 llvm_fuzz -- common/autotest_common.sh@546 -- # fuzzers=() 00:07:18.085 02:48:08 llvm_fuzz -- common/autotest_common.sh@546 -- # local fuzzers 00:07:18.085 02:48:08 llvm_fuzz -- common/autotest_common.sh@548 -- # [[ -n '' ]] 00:07:18.085 02:48:08 llvm_fuzz -- common/autotest_common.sh@551 -- # fuzzers=("$rootdir/test/fuzz/llvm/"*) 00:07:18.085 02:48:08 llvm_fuzz -- common/autotest_common.sh@552 -- # fuzzers=("${fuzzers[@]##*/}") 00:07:18.085 02:48:08 llvm_fuzz -- common/autotest_common.sh@555 -- # echo 'common.sh llvm-gcov.sh nvmf vfio' 00:07:18.085 02:48:08 llvm_fuzz -- fuzz/llvm.sh@13 -- # llvm_out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:07:18.085 02:48:08 llvm_fuzz -- fuzz/llvm.sh@15 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/coverage 00:07:18.085 02:48:08 llvm_fuzz -- fuzz/llvm.sh@56 -- # [[ 1 -eq 0 ]] 00:07:18.085 02:48:08 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:07:18.085 02:48:08 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:07:18.085 02:48:08 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:07:18.085 02:48:08 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:07:18.085 02:48:08 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:07:18.085 02:48:08 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:07:18.085 02:48:08 llvm_fuzz -- fuzz/llvm.sh@62 -- # run_test nvmf_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:07:18.085 02:48:08 llvm_fuzz -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:18.085 02:48:08 llvm_fuzz -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:18.085 02:48:08 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:18.085 ************************************ 00:07:18.085 START TEST nvmf_fuzz 00:07:18.085 ************************************ 00:07:18.085 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:07:18.085 * Looking for test storage... 
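Before handing off to the per-target run scripts, llvm.sh derives its fuzzer list from the directory entries under test/fuzz/llvm/ and walks them through a case statement, as the get_fuzzer_targets trace above shows. A condensed sketch of that enumeration, assuming $rootdir points at the SPDK checkout (the echo stands in for the real run_test dispatch):

    # llvm.sh-style target discovery: one candidate per entry under test/fuzz/llvm/,
    # reduced to its basename (common.sh llvm-gcov.sh nvmf vfio in this run).
    fuzzers=("$rootdir/test/fuzz/llvm/"*)
    fuzzers=("${fuzzers[@]##*/}")
    for fuzzer in "${fuzzers[@]}"; do
        case "$fuzzer" in
            nvmf | vfio)
                echo "would run $rootdir/test/fuzz/llvm/$fuzzer/run.sh"
                ;;
            *)
                # Helper scripts such as common.sh and llvm-gcov.sh are not fuzz targets.
                ;;
        esac
    done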
00:07:18.085 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:18.085 02:48:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@60 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:07:18.085 02:48:08 llvm_fuzz.nvmf_fuzz -- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:07:18.085 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:18.085 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@34 -- # set -e 00:07:18.085 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:18.085 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:18.085 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:07:18.085 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:18.085 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:07:18.085 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:18.085 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:18.085 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:18.085 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:18.085 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:18.085 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:18.085 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:18.085 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:18.085 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:18.085 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:18.085 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:18.085 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:18.085 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:18.085 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:18.085 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:18.085 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:18.085 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:18.085 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:18.085 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:18.085 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:18.371 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:18.371 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:18.371 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:18.371 02:48:08 llvm_fuzz.nvmf_fuzz 
-- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:18.371 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:18.371 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:18.371 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:18.371 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:18.371 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:18.371 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:18.371 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:18.371 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:18.371 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:18.371 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB=/usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:07:18.371 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER=y 00:07:18.371 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build 00:07:18.371 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:18.371 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:18.371 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:18.371 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:18.371 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:07:18.371 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:18.371 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:18.371 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:18.371 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:18.371 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:07:18.371 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:07:18.371 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:18.371 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:07:18.371 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:07:18.371 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:07:18.371 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:07:18.371 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:07:18.371 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=n 00:07:18.371 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:07:18.371 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:07:18.371 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:07:18.371 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:07:18.371 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@59 -- # CONFIG_GOLANG=n 
00:07:18.371 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:07:18.371 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:07:18.372 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:07:18.372 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@63 -- # CONFIG_RDMA_PROV=verbs 00:07:18.372 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@64 -- # CONFIG_APPS=y 00:07:18.372 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@65 -- # CONFIG_SHARED=n 00:07:18.372 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=n 00:07:18.372 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:07:18.372 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:18.372 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@69 -- # CONFIG_FC=n 00:07:18.372 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@70 -- # CONFIG_AVAHI=n 00:07:18.372 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:07:18.372 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@72 -- # CONFIG_RAID5F=n 00:07:18.372 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@73 -- # CONFIG_EXAMPLES=y 00:07:18.372 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:07:18.372 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:07:18.372 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:07:18.372 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:07:18.372 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:07:18.372 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:07:18.372 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:18.372 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@81 -- # CONFIG_CROSS_PREFIX= 00:07:18.372 02:48:08 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@82 -- # CONFIG_URING=n 00:07:18.372 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:18.372 02:48:08 llvm_fuzz.nvmf_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:18.372 02:48:08 llvm_fuzz.nvmf_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:18.372 02:48:08 llvm_fuzz.nvmf_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:18.372 02:48:08 llvm_fuzz.nvmf_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:18.372 02:48:08 llvm_fuzz.nvmf_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:18.372 02:48:08 llvm_fuzz.nvmf_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:07:18.372 02:48:08 llvm_fuzz.nvmf_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:18.372 02:48:08 llvm_fuzz.nvmf_fuzz -- common/applications.sh@14 -- # 
VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:18.372 02:48:08 llvm_fuzz.nvmf_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:18.372 02:48:08 llvm_fuzz.nvmf_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:18.372 02:48:08 llvm_fuzz.nvmf_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:18.372 02:48:08 llvm_fuzz.nvmf_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:18.372 02:48:08 llvm_fuzz.nvmf_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:18.372 02:48:08 llvm_fuzz.nvmf_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:07:18.372 02:48:08 llvm_fuzz.nvmf_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:18.372 #define SPDK_CONFIG_H 00:07:18.372 #define SPDK_CONFIG_APPS 1 00:07:18.372 #define SPDK_CONFIG_ARCH native 00:07:18.372 #undef SPDK_CONFIG_ASAN 00:07:18.372 #undef SPDK_CONFIG_AVAHI 00:07:18.372 #undef SPDK_CONFIG_CET 00:07:18.372 #define SPDK_CONFIG_COVERAGE 1 00:07:18.372 #define SPDK_CONFIG_CROSS_PREFIX 00:07:18.372 #undef SPDK_CONFIG_CRYPTO 00:07:18.372 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:18.372 #undef SPDK_CONFIG_CUSTOMOCF 00:07:18.372 #undef SPDK_CONFIG_DAOS 00:07:18.372 #define SPDK_CONFIG_DAOS_DIR 00:07:18.372 #define SPDK_CONFIG_DEBUG 1 00:07:18.372 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:18.372 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build 00:07:18.372 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:07:18.372 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:07:18.372 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:18.372 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:18.372 #define SPDK_CONFIG_EXAMPLES 1 00:07:18.372 #undef SPDK_CONFIG_FC 00:07:18.372 #define SPDK_CONFIG_FC_PATH 00:07:18.372 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:18.372 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:18.372 #undef SPDK_CONFIG_FUSE 00:07:18.372 #define SPDK_CONFIG_FUZZER 1 00:07:18.372 #define SPDK_CONFIG_FUZZER_LIB /usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:07:18.372 #undef SPDK_CONFIG_GOLANG 00:07:18.372 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:18.372 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:18.372 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:18.372 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:07:18.372 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:18.372 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:18.372 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:18.372 #define SPDK_CONFIG_IDXD 1 00:07:18.372 #undef SPDK_CONFIG_IDXD_KERNEL 00:07:18.372 #undef SPDK_CONFIG_IPSEC_MB 00:07:18.372 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:18.372 #define SPDK_CONFIG_ISAL 1 00:07:18.372 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:18.372 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:18.372 #define SPDK_CONFIG_LIBDIR 00:07:18.372 #undef SPDK_CONFIG_LTO 00:07:18.372 #define SPDK_CONFIG_MAX_LCORES 00:07:18.372 #define SPDK_CONFIG_NVME_CUSE 1 00:07:18.372 #undef SPDK_CONFIG_OCF 00:07:18.372 #define SPDK_CONFIG_OCF_PATH 00:07:18.372 #define SPDK_CONFIG_OPENSSL_PATH 00:07:18.372 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:18.372 #define SPDK_CONFIG_PGO_DIR 00:07:18.372 #undef SPDK_CONFIG_PGO_USE 00:07:18.372 #define SPDK_CONFIG_PREFIX /usr/local 00:07:18.372 #undef SPDK_CONFIG_RAID5F 
00:07:18.372 #undef SPDK_CONFIG_RBD 00:07:18.372 #define SPDK_CONFIG_RDMA 1 00:07:18.372 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:18.372 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:18.372 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:18.372 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:18.372 #undef SPDK_CONFIG_SHARED 00:07:18.372 #undef SPDK_CONFIG_SMA 00:07:18.373 #define SPDK_CONFIG_TESTS 1 00:07:18.373 #undef SPDK_CONFIG_TSAN 00:07:18.373 #define SPDK_CONFIG_UBLK 1 00:07:18.373 #define SPDK_CONFIG_UBSAN 1 00:07:18.373 #undef SPDK_CONFIG_UNIT_TESTS 00:07:18.373 #undef SPDK_CONFIG_URING 00:07:18.373 #define SPDK_CONFIG_URING_PATH 00:07:18.373 #undef SPDK_CONFIG_URING_ZNS 00:07:18.373 #undef SPDK_CONFIG_USDT 00:07:18.373 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:18.373 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:18.373 #define SPDK_CONFIG_VFIO_USER 1 00:07:18.373 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:18.373 #define SPDK_CONFIG_VHOST 1 00:07:18.373 #define SPDK_CONFIG_VIRTIO 1 00:07:18.373 #undef SPDK_CONFIG_VTUNE 00:07:18.373 #define SPDK_CONFIG_VTUNE_DIR 00:07:18.373 #define SPDK_CONFIG_WERROR 1 00:07:18.373 #define SPDK_CONFIG_WPDK_DIR 00:07:18.373 #undef SPDK_CONFIG_XNVME 00:07:18.373 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:18.373 02:48:08 llvm_fuzz.nvmf_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:18.373 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:07:18.373 02:48:08 llvm_fuzz.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:18.373 02:48:08 llvm_fuzz.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:18.373 02:48:08 llvm_fuzz.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:18.373 02:48:08 llvm_fuzz.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.373 02:48:08 llvm_fuzz.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.373 02:48:08 llvm_fuzz.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.373 02:48:08 llvm_fuzz.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 
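Part of the environment probing above is applications.sh confirming that the tree was configured as a debug build, by pattern-matching the generated include/spdk/config.h for the SPDK_CONFIG_DEBUG define. An equivalent stand-alone check, under the same assumption about the checkout layout:

    config_h=include/spdk/config.h
    # The fuzz jobs expect a debug build; the generated header carries the marker.
    if grep -q '^#define SPDK_CONFIG_DEBUG 1' "$config_h"; then
        echo "debug build detected"
    else
        echo "SPDK_CONFIG_DEBUG not set in $config_h" >&2
        exit 1
    fi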
00:07:18.373 02:48:08 llvm_fuzz.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.373 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:18.373 02:48:08 llvm_fuzz.nvmf_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:18.373 02:48:08 llvm_fuzz.nvmf_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:18.373 02:48:08 llvm_fuzz.nvmf_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:18.373 02:48:08 llvm_fuzz.nvmf_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:18.373 02:48:08 llvm_fuzz.nvmf_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:18.373 02:48:08 llvm_fuzz.nvmf_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:07:18.373 02:48:08 llvm_fuzz.nvmf_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:07:18.373 02:48:08 llvm_fuzz.nvmf_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:07:18.373 02:48:08 llvm_fuzz.nvmf_fuzz -- pm/common@68 -- # uname -s 00:07:18.373 02:48:08 llvm_fuzz.nvmf_fuzz -- pm/common@68 -- # PM_OS=Linux 00:07:18.373 02:48:08 llvm_fuzz.nvmf_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:18.373 02:48:08 llvm_fuzz.nvmf_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:18.373 02:48:08 llvm_fuzz.nvmf_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:18.373 02:48:08 llvm_fuzz.nvmf_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:18.373 02:48:08 llvm_fuzz.nvmf_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:18.373 02:48:08 llvm_fuzz.nvmf_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:18.373 02:48:08 llvm_fuzz.nvmf_fuzz -- pm/common@76 -- # SUDO[0]= 00:07:18.373 02:48:08 llvm_fuzz.nvmf_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:18.373 02:48:08 llvm_fuzz.nvmf_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:18.373 02:48:08 llvm_fuzz.nvmf_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:18.373 02:48:08 llvm_fuzz.nvmf_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:18.373 02:48:08 llvm_fuzz.nvmf_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:18.373 02:48:08 llvm_fuzz.nvmf_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:18.373 02:48:08 llvm_fuzz.nvmf_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:18.373 02:48:08 llvm_fuzz.nvmf_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:18.373 02:48:08 llvm_fuzz.nvmf_fuzz -- pm/common@88 -- # [[ ! 
-d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:07:18.373 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@57 -- # : 1 00:07:18.373 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:07:18.373 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@61 -- # : 0 00:07:18.373 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:18.373 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@63 -- # : 0 00:07:18.373 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:07:18.373 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@65 -- # : 1 00:07:18.373 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:18.373 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@67 -- # : 0 00:07:18.373 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:07:18.373 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@69 -- # : 00:07:18.373 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:07:18.373 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@71 -- # : 0 00:07:18.373 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:07:18.373 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@73 -- # : 0 00:07:18.373 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:07:18.373 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@75 -- # : 0 00:07:18.373 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:07:18.373 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@77 -- # : 0 00:07:18.373 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:18.373 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@79 -- # : 0 00:07:18.373 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:07:18.373 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@81 -- # : 0 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@83 -- # : 0 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@85 -- # : 0 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@87 -- # : 0 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@89 -- # : 0 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@91 -- # : 0 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@93 -- # : 0 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@95 -- # : 0 00:07:18.374 
02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@97 -- # : 1 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@99 -- # : 1 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@101 -- # : rdma 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@103 -- # : 0 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@105 -- # : 0 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@107 -- # : 0 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@109 -- # : 0 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@111 -- # : 0 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@113 -- # : 0 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@115 -- # : 0 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@117 -- # : 0 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@119 -- # : 0 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@121 -- # : 1 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@123 -- # : /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@125 -- # : 0 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@127 -- # : 0 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@129 -- # : 0 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@131 -- # : 0 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 
00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@133 -- # : 0 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@135 -- # : 0 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@137 -- # : main 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@139 -- # : true 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@141 -- # : 0 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@143 -- # : 0 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@145 -- # : 0 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@147 -- # : 0 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@149 -- # : 0 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@151 -- # : 0 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@153 -- # : 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@155 -- # : 0 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@157 -- # : 0 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@159 -- # : 0 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@161 -- # : 0 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@163 -- # : 0 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@166 -- # : 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@168 -- # : 0 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:07:18.374 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@170 -- # : 0 00:07:18.375 02:48:08 llvm_fuzz.nvmf_fuzz -- 
common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:18.375 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:18.375 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:18.375 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:07:18.375 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:07:18.375 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:18.375 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:18.375 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:18.375 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:18.375 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:18.375 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@180 -- # 
PCI_BLOCK_SYNC_ON_RESET=yes 00:07:18.375 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:18.375 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:18.375 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:18.375 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:07:18.375 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:18.375 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:18.375 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:18.375 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:18.375 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:18.375 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:07:18.375 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@199 -- # cat 00:07:18.375 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@235 -- # echo leak:libfuse3.so 00:07:18.375 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:18.375 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:18.375 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:18.375 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:18.375 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:07:18.375 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:07:18.375 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:18.375 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:18.375 02:48:08 
llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:18.375 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:18.375 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:18.375 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:18.375 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:18.375 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:18.375 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@255 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:18.375 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@255 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:18.375 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:18.375 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:18.375 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:07:18.375 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@262 -- # export valgrind= 00:07:18.375 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@262 -- # valgrind= 00:07:18.375 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@268 -- # uname -s 00:07:18.375 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:07:18.375 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:07:18.375 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@270 -- # export CLEAR_HUGE=yes 00:07:18.375 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:07:18.375 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:07:18.375 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:07:18.375 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@278 -- # MAKE=make 00:07:18.375 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j112 00:07:18.375 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:07:18.375 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:07:18.375 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:07:18.375 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@298 -- # TEST_MODE= 00:07:18.375 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@317 -- # [[ -z 3497290 ]] 00:07:18.375 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@317 -- # kill -0 3497290 00:07:18.375 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:07:18.375 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:07:18.376 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:07:18.376 02:48:08 llvm_fuzz.nvmf_fuzz -- 
common/autotest_common.sh@330 -- # local mount target_dir 00:07:18.376 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:07:18.376 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:07:18.376 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:07:18.376 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:07:18.376 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.evZyVX 00:07:18.376 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:18.376 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:07:18.376 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:07:18.376 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@354 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf /tmp/spdk.evZyVX/tests/nvmf /tmp/spdk.evZyVX 00:07:18.376 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:07:18.376 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:18.376 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@326 -- # df -T 00:07:18.376 02:48:08 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:07:18.376 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_devtmpfs 00:07:18.376 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:07:18.376 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=67108864 00:07:18.376 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=67108864 00:07:18.376 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:07:18.376 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:18.376 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/pmem0 00:07:18.376 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=ext2 00:07:18.376 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=976003072 00:07:18.376 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=5284429824 00:07:18.376 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=4308426752 00:07:18.376 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:18.376 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_root 00:07:18.376 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=overlay 00:07:18.376 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=52109942784 00:07:18.376 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=61742305280 00:07:18.376 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=9632362496 00:07:18.376 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 
00:07:18.376 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:18.376 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:18.376 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=30866440192 00:07:18.376 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=30871150592 00:07:18.376 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=4710400 00:07:18.376 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:18.376 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:18.376 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:18.376 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=12342489088 00:07:18.376 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=12348461056 00:07:18.376 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=5971968 00:07:18.376 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:18.376 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:18.376 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:18.376 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=30870355968 00:07:18.376 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=30871154688 00:07:18.376 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=798720 00:07:18.376 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:18.376 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:18.376 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:18.376 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=6174224384 00:07:18.376 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=6174228480 00:07:18.376 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:07:18.376 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:18.376 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:07:18.376 * Looking for test storage... 
00:07:18.376 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@367 -- # local target_space new_size 00:07:18.376 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:07:18.376 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@371 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:18.376 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:18.376 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@371 -- # mount=/ 00:07:18.376 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@373 -- # target_space=52109942784 00:07:18.376 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:07:18.376 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:07:18.376 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@379 -- # [[ overlay == tmpfs ]] 00:07:18.376 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@379 -- # [[ overlay == ramfs ]] 00:07:18.376 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:07:18.376 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@380 -- # new_size=11846955008 00:07:18.376 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:18.376 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:18.376 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:18.376 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:18.376 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:18.376 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@388 -- # return 0 00:07:18.376 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1678 -- # set -o errtrace 00:07:18.376 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:07:18.376 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:18.376 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:18.376 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1683 -- # true 00:07:18.377 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1685 -- # xtrace_fd 00:07:18.377 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:18.377 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:18.377 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@27 -- # exec 00:07:18.377 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@29 -- # exec 00:07:18.377 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:18.377 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:07:18.377 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:18.377 02:48:09 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@18 -- # set -x 00:07:18.377 02:48:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@61 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/../common.sh 00:07:18.377 02:48:09 llvm_fuzz.nvmf_fuzz -- ../common.sh@8 -- # pids=() 00:07:18.377 02:48:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@63 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:07:18.377 02:48:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@64 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:07:18.377 02:48:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@64 -- # fuzz_num=25 00:07:18.377 02:48:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@65 -- # (( fuzz_num != 0 )) 00:07:18.377 02:48:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@67 -- # trap 'cleanup /tmp/llvm_fuzz* /var/tmp/suppress_nvmf_fuzz; exit 1' SIGINT SIGTERM EXIT 00:07:18.377 02:48:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@69 -- # mem_size=512 00:07:18.377 02:48:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@70 -- # [[ 1 -eq 1 ]] 00:07:18.377 02:48:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@71 -- # start_llvm_fuzz_short 25 1 00:07:18.377 02:48:09 llvm_fuzz.nvmf_fuzz -- ../common.sh@69 -- # local fuzz_num=25 00:07:18.377 02:48:09 llvm_fuzz.nvmf_fuzz -- ../common.sh@70 -- # local time=1 00:07:18.377 02:48:09 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:07:18.377 02:48:09 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:18.377 02:48:09 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:07:18.377 02:48:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=0 00:07:18.377 02:48:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:18.377 02:48:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:18.377 02:48:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:07:18.377 02:48:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_0.conf 00:07:18.377 02:48:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:18.377 02:48:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:18.377 02:48:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 0 00:07:18.377 02:48:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4400 00:07:18.377 02:48:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:07:18.377 02:48:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' 00:07:18.377 02:48:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4400"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:18.377 02:48:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:18.377 02:48:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:18.377 02:48:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 
-s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' -c /tmp/fuzz_json_0.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 -Z 0 00:07:18.377 [2024-05-13 02:48:09.086096] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:07:18.377 [2024-05-13 02:48:09.086161] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3497441 ] 00:07:18.377 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.637 [2024-05-13 02:48:09.303542] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:18.637 [2024-05-13 02:48:09.341158] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.637 [2024-05-13 02:48:09.372174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.637 [2024-05-13 02:48:09.424441] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:18.637 [2024-05-13 02:48:09.440402] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:18.637 [2024-05-13 02:48:09.440826] tcp.c: 966:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4400 *** 00:07:18.896 INFO: Running with entropic power schedule (0xFF, 100). 00:07:18.896 INFO: Seed: 1083169073 00:07:18.896 INFO: Loaded 1 modules (350429 inline 8-bit counters): 350429 [0x279074c, 0x27e6029), 00:07:18.896 INFO: Loaded 1 PC tables (350429 PCs): 350429 [0x27e6030,0x2d3ee00), 00:07:18.896 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:07:18.896 INFO: A corpus is not provided, starting from an empty corpus 00:07:18.896 #2 INITED exec/s: 0 rss: 63Mb 00:07:18.896 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:18.896 This may also happen if the target rejected all inputs we tried so far 00:07:18.896 [2024-05-13 02:48:09.489762] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a0) qid:0 cid:4 nsid:a0a0a0a0 cdw10:a0a0a0a0 cdw11:a0a0a0a0 00:07:18.896 [2024-05-13 02:48:09.489789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.156 NEW_FUNC[1/684]: 0x4a3c60 in fuzz_admin_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:47 00:07:19.156 NEW_FUNC[2/684]: 0x4e0360 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:19.156 #5 NEW cov: 11740 ft: 11741 corp: 2/110b lim: 320 exec/s: 0 rss: 70Mb L: 109/109 MS: 3 ChangeByte-ChangeByte-InsertRepeatedBytes- 00:07:19.156 [2024-05-13 02:48:09.800550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a0) qid:0 cid:4 nsid:a0a0a0a0 cdw10:a0a0a0a0 cdw11:a0a0a0a0 00:07:19.156 [2024-05-13 02:48:09.800591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.156 #6 NEW cov: 11870 ft: 12337 corp: 3/219b lim: 320 exec/s: 0 rss: 70Mb L: 109/109 MS: 1 ChangeBinInt- 00:07:19.156 [2024-05-13 02:48:09.850632] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a0) qid:0 cid:4 nsid:a0a0a0a0 cdw10:a0a0a0a0 cdw11:a0a0a0a0 00:07:19.156 [2024-05-13 02:48:09.850657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.156 #7 NEW cov: 11876 ft: 12654 corp: 4/328b lim: 320 exec/s: 0 rss: 70Mb L: 109/109 MS: 1 CrossOver- 00:07:19.156 [2024-05-13 02:48:09.890712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a0) qid:0 cid:4 nsid:a0a0a0a0 cdw10:a0a0a0a0 cdw11:a0a0a0a0 00:07:19.156 [2024-05-13 02:48:09.890737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.156 #13 NEW cov: 11961 ft: 12942 corp: 5/437b lim: 320 exec/s: 0 rss: 70Mb L: 109/109 MS: 1 CrossOver- 00:07:19.156 [2024-05-13 02:48:09.930887] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a0) qid:0 cid:4 nsid:a0a0a0a0 cdw10:a0a0a0a0 cdw11:a0a0a0a0 00:07:19.156 [2024-05-13 02:48:09.930911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.156 #19 NEW cov: 11961 ft: 13060 corp: 6/546b lim: 320 exec/s: 0 rss: 70Mb L: 109/109 MS: 1 CrossOver- 00:07:19.416 [2024-05-13 02:48:09.970934] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a0) qid:0 cid:4 nsid:a0a0a0a0 cdw10:a0a0a0a0 cdw11:a0a0a0a0 00:07:19.416 [2024-05-13 02:48:09.970959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.416 #20 NEW cov: 11961 ft: 13129 corp: 7/655b lim: 320 exec/s: 0 rss: 70Mb L: 109/109 MS: 1 ChangeByte- 00:07:19.416 [2024-05-13 02:48:10.001169] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a0) qid:0 cid:4 nsid:a0a0a0a0 cdw10:a0a0a0a0 cdw11:a0a0a0a0 00:07:19.416 [2024-05-13 02:48:10.001196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.416 [2024-05-13 02:48:10.001255] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a0) qid:0 cid:5 nsid:a0a0a0a0 cdw10:a0a0a0a0 cdw11:a0a0a0a0 00:07:19.416 [2024-05-13 02:48:10.001269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.416 #21 NEW cov: 11961 ft: 13385 corp: 8/832b lim: 320 exec/s: 0 rss: 70Mb L: 177/177 MS: 1 CopyPart- 00:07:19.416 [2024-05-13 02:48:10.041467] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a0) qid:0 cid:4 nsid:a0a0a0a0 cdw10:a0a0a0a0 cdw11:a0a0a0a0 00:07:19.416 [2024-05-13 02:48:10.041496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.416 [2024-05-13 02:48:10.041554] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a0) qid:0 cid:5 nsid:a0a0a0a0 cdw10:a0a0a0a0 cdw11:a0a0a0a0 00:07:19.416 [2024-05-13 02:48:10.041567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.416 [2024-05-13 02:48:10.041628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a0) qid:0 cid:6 nsid:a0a0a0a0 cdw10:a0a0a0a0 cdw11:a0a0a0a0 00:07:19.416 [2024-05-13 02:48:10.041641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:19.416 #22 NEW cov: 11961 ft: 13611 corp: 9/1034b lim: 320 exec/s: 0 rss: 70Mb L: 202/202 MS: 1 CopyPart- 00:07:19.416 [2024-05-13 02:48:10.081267] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a0) qid:0 cid:4 nsid:a0a0a0a0 cdw10:a0a0a0a0 cdw11:a0a0a0a0 00:07:19.416 [2024-05-13 02:48:10.081296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.416 #23 NEW cov: 11961 ft: 13649 corp: 10/1143b lim: 320 exec/s: 0 rss: 70Mb L: 109/202 MS: 1 ShuffleBytes- 00:07:19.416 [2024-05-13 02:48:10.121393] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a0) qid:0 cid:4 nsid:a0a0a0a0 cdw10:a0a0a09c cdw11:a0a0a0a0 00:07:19.416 [2024-05-13 02:48:10.121419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.416 #24 NEW cov: 11961 ft: 13694 corp: 11/1213b lim: 320 exec/s: 0 rss: 70Mb L: 70/202 MS: 1 EraseBytes- 00:07:19.416 [2024-05-13 02:48:10.161461] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a0) qid:0 cid:4 nsid:a0a0a0a0 cdw10:a0a0a0a0 cdw11:a0a0a0a0 00:07:19.416 [2024-05-13 02:48:10.161487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.416 #25 NEW cov: 11961 ft: 13737 corp: 12/1322b lim: 320 exec/s: 0 rss: 70Mb L: 109/202 MS: 1 CrossOver- 00:07:19.416 [2024-05-13 02:48:10.201860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a0) qid:0 cid:4 nsid:a0a0a0a0 cdw10:a0a0a0a0 cdw11:a0a0a0a0 00:07:19.416 [2024-05-13 02:48:10.201885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.416 [2024-05-13 02:48:10.201943] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a0) qid:0 cid:5 nsid:a0a0a0a0 cdw10:a0a0a0a0 cdw11:a0a0a026 00:07:19.416 [2024-05-13 02:48:10.201956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 
cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.416 [2024-05-13 02:48:10.202016] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a0) qid:0 cid:6 nsid:a0a0a0a0 cdw10:a0a0a0a0 cdw11:a0a0a0a0 00:07:19.416 [2024-05-13 02:48:10.202029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:19.676 #26 NEW cov: 11961 ft: 13763 corp: 13/1525b lim: 320 exec/s: 0 rss: 70Mb L: 203/203 MS: 1 InsertByte- 00:07:19.676 [2024-05-13 02:48:10.251956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a0) qid:0 cid:4 nsid:a0a0a0a0 cdw10:a0a0a0a0 cdw11:a0a0a0a0 00:07:19.676 [2024-05-13 02:48:10.251981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.676 [2024-05-13 02:48:10.252039] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a0) qid:0 cid:5 nsid:a0a0a0a0 cdw10:a0a0a0a0 cdw11:a0a0a0a0 00:07:19.676 [2024-05-13 02:48:10.252053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.676 [2024-05-13 02:48:10.252112] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a0) qid:0 cid:6 nsid:a0a0a0a0 cdw10:a0a0a0a0 cdw11:a0a0a0a0 00:07:19.676 [2024-05-13 02:48:10.252125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:19.676 #27 NEW cov: 11961 ft: 13780 corp: 14/1727b lim: 320 exec/s: 0 rss: 70Mb L: 202/203 MS: 1 ChangeByte- 00:07:19.676 [2024-05-13 02:48:10.291809] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a0) qid:0 cid:4 nsid:a0a0a0a0 cdw10:a0a0a0a0 cdw11:a0a0a0a0 00:07:19.676 [2024-05-13 02:48:10.291834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.676 #28 NEW cov: 11961 ft: 13790 corp: 15/1824b lim: 320 exec/s: 0 rss: 70Mb L: 97/203 MS: 1 EraseBytes- 00:07:19.676 [2024-05-13 02:48:10.331948] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a0) qid:0 cid:4 nsid:a0a0a0a0 cdw10:a0a0a0a0 cdw11:a0a0a0a0 00:07:19.676 [2024-05-13 02:48:10.331973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.676 #29 NEW cov: 11961 ft: 13804 corp: 16/1900b lim: 320 exec/s: 0 rss: 70Mb L: 76/203 MS: 1 EraseBytes- 00:07:19.676 [2024-05-13 02:48:10.372097] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a0) qid:0 cid:4 nsid:a0a0a0a0 cdw10:a0a0a0a0 cdw11:a0a0a0a0 00:07:19.676 [2024-05-13 02:48:10.372123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.676 NEW_FUNC[1/1]: 0x1a07120 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:19.676 #30 NEW cov: 11984 ft: 13867 corp: 17/2009b lim: 320 exec/s: 0 rss: 70Mb L: 109/203 MS: 1 CopyPart- 00:07:19.676 [2024-05-13 02:48:10.412174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a0) qid:0 cid:4 nsid:a0a0a0a0 cdw10:a0a0a0a0 cdw11:a0a0a0a0 00:07:19.676 [2024-05-13 02:48:10.412200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.676 #31 NEW cov: 11984 ft: 13899 corp: 18/2118b lim: 320 exec/s: 0 rss: 70Mb L: 109/203 
MS: 1 ChangeByte- 00:07:19.676 [2024-05-13 02:48:10.452297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a0) qid:0 cid:4 nsid:a0a0a0a0 cdw10:a0a0a0a0 cdw11:a0a0a0a0 00:07:19.676 [2024-05-13 02:48:10.452322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.937 #32 NEW cov: 11984 ft: 13901 corp: 19/2194b lim: 320 exec/s: 32 rss: 70Mb L: 76/203 MS: 1 ChangeBit- 00:07:19.937 [2024-05-13 02:48:10.492487] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a0) qid:0 cid:4 nsid:a0a0a0a0 cdw10:a0a0a0a0 cdw11:a0a0a0a0 00:07:19.937 [2024-05-13 02:48:10.492512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.937 #33 NEW cov: 11984 ft: 13917 corp: 20/2271b lim: 320 exec/s: 33 rss: 70Mb L: 77/203 MS: 1 InsertByte- 00:07:19.937 [2024-05-13 02:48:10.532598] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a0) qid:0 cid:4 nsid:a0a0a0a0 cdw10:a0a0a0a0 cdw11:a0a0a0a0 00:07:19.937 [2024-05-13 02:48:10.532624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.937 #34 NEW cov: 11984 ft: 13924 corp: 21/2380b lim: 320 exec/s: 34 rss: 70Mb L: 109/203 MS: 1 CrossOver- 00:07:19.937 [2024-05-13 02:48:10.572711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a0) qid:0 cid:4 nsid:a0a0a0a0 cdw10:a0a0a0a0 cdw11:a0a0a0a0 00:07:19.937 [2024-05-13 02:48:10.572736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.937 #35 NEW cov: 11984 ft: 13958 corp: 22/2456b lim: 320 exec/s: 35 rss: 70Mb L: 76/203 MS: 1 ChangeByte- 00:07:19.937 [2024-05-13 02:48:10.612927] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a0) qid:0 cid:4 nsid:a0a0a0a0 cdw10:a0a0a0a0 cdw11:a0a0a0a0 00:07:19.937 [2024-05-13 02:48:10.612952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.937 [2024-05-13 02:48:10.613027] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a0) qid:0 cid:5 nsid:a0a0a0a0 cdw10:a0a0a0a0 cdw11:a0a0a0a0 00:07:19.937 [2024-05-13 02:48:10.613041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.937 #36 NEW cov: 11984 ft: 13975 corp: 23/2646b lim: 320 exec/s: 36 rss: 70Mb L: 190/203 MS: 1 EraseBytes- 00:07:19.937 [2024-05-13 02:48:10.652933] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a0) qid:0 cid:4 nsid:a0a0a0a0 cdw10:a0a0a0a0 cdw11:a0a0a0a0 00:07:19.937 [2024-05-13 02:48:10.652958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.937 #37 NEW cov: 11984 ft: 13998 corp: 24/2755b lim: 320 exec/s: 37 rss: 70Mb L: 109/203 MS: 1 ChangeByte- 00:07:19.937 [2024-05-13 02:48:10.693015] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a0) qid:0 cid:4 nsid:a0a0a0a0 cdw10:a0a0a0a0 cdw11:a0a0a0a0 00:07:19.937 [2024-05-13 02:48:10.693040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.937 #38 NEW cov: 11984 ft: 14044 corp: 25/2852b lim: 320 exec/s: 38 rss: 71Mb L: 97/203 MS: 1 ChangeBinInt- 
00:07:19.937 [2024-05-13 02:48:10.733296] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a0) qid:0 cid:4 nsid:a0a0a0a0 cdw10:a0a0a0a0 cdw11:a0a0a0a0 00:07:19.937 [2024-05-13 02:48:10.733324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.937 [2024-05-13 02:48:10.733385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a0) qid:0 cid:5 nsid:a0a0a0a0 cdw10:ecececec cdw11:ecececec 00:07:19.937 [2024-05-13 02:48:10.733399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.206 #39 NEW cov: 11984 ft: 14080 corp: 26/3043b lim: 320 exec/s: 39 rss: 71Mb L: 191/203 MS: 1 InsertRepeatedBytes- 00:07:20.206 [2024-05-13 02:48:10.783303] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a0) qid:0 cid:4 nsid:a0a0a0a0 cdw10:a0ffa0a0 cdw11:a0a0a0a0 00:07:20.206 [2024-05-13 02:48:10.783328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.206 #40 NEW cov: 11984 ft: 14094 corp: 27/3120b lim: 320 exec/s: 40 rss: 71Mb L: 77/203 MS: 1 InsertByte- 00:07:20.206 [2024-05-13 02:48:10.823430] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a0) qid:0 cid:4 nsid:a0a0a0a0 cdw10:a0a0a0a0 cdw11:a0a0a0a0 00:07:20.206 [2024-05-13 02:48:10.823471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.207 #41 NEW cov: 11984 ft: 14104 corp: 28/3229b lim: 320 exec/s: 41 rss: 71Mb L: 109/203 MS: 1 CrossOver- 00:07:20.207 [2024-05-13 02:48:10.863525] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a0) qid:0 cid:4 nsid:a0a0a0a0 cdw10:a0a0a0a0 cdw11:a0a0a0a0 00:07:20.207 [2024-05-13 02:48:10.863551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.207 #42 NEW cov: 11984 ft: 14115 corp: 29/3305b lim: 320 exec/s: 42 rss: 71Mb L: 76/203 MS: 1 ChangeBit- 00:07:20.207 [2024-05-13 02:48:10.904004] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a0) qid:0 cid:4 nsid:a0a0a0a0 cdw10:a0a0a0a0 cdw11:a0a0a0a0 00:07:20.207 [2024-05-13 02:48:10.904029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.207 [2024-05-13 02:48:10.904087] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a0) qid:0 cid:5 nsid:a0a0a0a0 cdw10:8282a0a0 cdw11:82828282 00:07:20.207 [2024-05-13 02:48:10.904101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.207 [2024-05-13 02:48:10.904160] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:82828282 cdw10:a0a0a0a0 cdw11:a0a0a0a0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x8282828282828282 00:07:20.207 [2024-05-13 02:48:10.904173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.207 [2024-05-13 02:48:10.904229] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a0) qid:0 cid:7 nsid:a0a0a0a0 cdw10:a0a0a0a0 cdw11:a0a0a0a0 00:07:20.207 [2024-05-13 02:48:10.904242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) 
qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:20.207 NEW_FUNC[1/1]: 0x174a060 in nvme_get_sgl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:159 00:07:20.207 #43 NEW cov: 12005 ft: 14339 corp: 30/3562b lim: 320 exec/s: 43 rss: 71Mb L: 257/257 MS: 1 InsertRepeatedBytes- 00:07:20.207 [2024-05-13 02:48:10.943759] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a0) qid:0 cid:4 nsid:a0a0a0a0 cdw10:a0a0a0a0 cdw11:a0a0a0a0 00:07:20.207 [2024-05-13 02:48:10.943784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.207 #44 NEW cov: 12005 ft: 14361 corp: 31/3671b lim: 320 exec/s: 44 rss: 71Mb L: 109/257 MS: 1 ChangeBinInt- 00:07:20.207 [2024-05-13 02:48:10.983886] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a0) qid:0 cid:4 nsid:a0a0a0a0 cdw10:a0a0a0a0 cdw11:a0a0a0a0 00:07:20.207 [2024-05-13 02:48:10.983911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.207 #45 NEW cov: 12005 ft: 14366 corp: 32/3735b lim: 320 exec/s: 45 rss: 71Mb L: 64/257 MS: 1 EraseBytes- 00:07:20.465 [2024-05-13 02:48:11.013981] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a0) qid:0 cid:4 nsid:a0a0a0a0 cdw10:a0a0a0a0 cdw11:a0a0a023 00:07:20.465 [2024-05-13 02:48:11.014007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.465 #46 NEW cov: 12005 ft: 14371 corp: 33/3812b lim: 320 exec/s: 46 rss: 71Mb L: 77/257 MS: 1 InsertByte- 00:07:20.465 [2024-05-13 02:48:11.054094] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a0) qid:0 cid:4 nsid:a0a0a0a0 cdw10:a0a0a0a0 cdw11:a0a0a0a0 00:07:20.465 [2024-05-13 02:48:11.054119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.465 #47 NEW cov: 12005 ft: 14380 corp: 34/3921b lim: 320 exec/s: 47 rss: 71Mb L: 109/257 MS: 1 ShuffleBytes- 00:07:20.465 [2024-05-13 02:48:11.094562] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a0) qid:0 cid:4 nsid:a0a0a0a0 cdw10:a0a0a0a0 cdw11:a0a0a0a0 00:07:20.465 [2024-05-13 02:48:11.094587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.465 [2024-05-13 02:48:11.094644] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a0) qid:0 cid:5 nsid:a0a0a0a0 cdw10:8282a0a0 cdw11:82828282 00:07:20.465 [2024-05-13 02:48:11.094658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.465 [2024-05-13 02:48:11.094714] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:82828282 cdw10:a0a0a0a0 cdw11:a0a0a0a0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x8282828282828282 00:07:20.465 [2024-05-13 02:48:11.094728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.465 [2024-05-13 02:48:11.094782] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a0) qid:0 cid:7 nsid:a0a0a0a0 cdw10:a0a0a0a0 cdw11:a0a0a0a0 00:07:20.465 [2024-05-13 02:48:11.094794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 
00:07:20.465 #48 NEW cov: 12005 ft: 14402 corp: 35/4178b lim: 320 exec/s: 48 rss: 71Mb L: 257/257 MS: 1 ChangeBinInt- 00:07:20.465 [2024-05-13 02:48:11.134349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a0) qid:0 cid:4 nsid:a0a0a0a0 cdw10:a0a0a0a0 cdw11:a0a0a0a0 00:07:20.465 [2024-05-13 02:48:11.134374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.465 #49 NEW cov: 12005 ft: 14438 corp: 36/4254b lim: 320 exec/s: 49 rss: 71Mb L: 76/257 MS: 1 CrossOver- 00:07:20.465 [2024-05-13 02:48:11.174467] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a0) qid:0 cid:4 nsid:a0a0a0a0 cdw10:a0a0a0a0 cdw11:a0a0a0a0 00:07:20.465 [2024-05-13 02:48:11.174491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.465 #50 NEW cov: 12005 ft: 14450 corp: 37/4363b lim: 320 exec/s: 50 rss: 72Mb L: 109/257 MS: 1 ShuffleBytes- 00:07:20.465 [2024-05-13 02:48:11.214558] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a0) qid:0 cid:4 nsid:a0a0a0a0 cdw10:a0a0a0a0 cdw11:a0a0a0a0 00:07:20.465 [2024-05-13 02:48:11.214583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.465 #51 NEW cov: 12005 ft: 14451 corp: 38/4460b lim: 320 exec/s: 51 rss: 72Mb L: 97/257 MS: 1 ShuffleBytes- 00:07:20.465 [2024-05-13 02:48:11.254916] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a0) qid:0 cid:4 nsid:a0a0a0a0 cdw10:a0a0a0a0 cdw11:a0a0a0a0 00:07:20.465 [2024-05-13 02:48:11.254940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.465 [2024-05-13 02:48:11.254995] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a0) qid:0 cid:5 nsid:a0a0a0a0 cdw10:a0a0a0a0 cdw11:a0a0a0a0 00:07:20.465 [2024-05-13 02:48:11.255012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.465 [2024-05-13 02:48:11.255068] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a0) qid:0 cid:6 nsid:a0a0a0a0 cdw10:a0a0a0a0 cdw11:a0a0a0a0 00:07:20.465 [2024-05-13 02:48:11.255081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.725 #52 NEW cov: 12005 ft: 14487 corp: 39/4662b lim: 320 exec/s: 52 rss: 72Mb L: 202/257 MS: 1 ChangeBit- 00:07:20.725 [2024-05-13 02:48:11.294802] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a0) qid:0 cid:4 nsid:a0a0a0a0 cdw10:a0a0a0a0 cdw11:a0a0a0a0 00:07:20.725 [2024-05-13 02:48:11.294826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.725 #53 NEW cov: 12005 ft: 14538 corp: 40/4771b lim: 320 exec/s: 53 rss: 72Mb L: 109/257 MS: 1 ChangeByte- 00:07:20.725 [2024-05-13 02:48:11.335270] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a0) qid:0 cid:4 nsid:a0a0a0a0 cdw10:a0a0a0a0 cdw11:a0a0a0a0 00:07:20.725 [2024-05-13 02:48:11.335295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.725 [2024-05-13 02:48:11.335354] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a0) qid:0 
cid:5 nsid:a0a0a0a0 cdw10:54545454 cdw11:54545454 00:07:20.725 [2024-05-13 02:48:11.335367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.725 [2024-05-13 02:48:11.335423] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (54) qid:0 cid:6 nsid:54545454 cdw10:54545454 cdw11:a0a05454 00:07:20.725 [2024-05-13 02:48:11.335437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.725 [2024-05-13 02:48:11.335492] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a0) qid:0 cid:7 nsid:a0a0a0a0 cdw10:a0a0a0a0 cdw11:a0a0a0a0 00:07:20.725 [2024-05-13 02:48:11.335505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:20.725 #54 NEW cov: 12005 ft: 14572 corp: 41/5073b lim: 320 exec/s: 54 rss: 72Mb L: 302/302 MS: 1 InsertRepeatedBytes- 00:07:20.725 [2024-05-13 02:48:11.375033] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a0) qid:0 cid:4 nsid:a0a0a0a0 cdw10:a0a0a0a0 cdw11:a0a0a0a0 00:07:20.725 [2024-05-13 02:48:11.375058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.725 #55 NEW cov: 12005 ft: 14575 corp: 42/5182b lim: 320 exec/s: 55 rss: 72Mb L: 109/302 MS: 1 ChangeBinInt- 00:07:20.725 [2024-05-13 02:48:11.415196] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a0) qid:0 cid:4 nsid:a0a0a0a0 cdw10:a0a0a0a0 cdw11:a0a0a0a0 00:07:20.725 [2024-05-13 02:48:11.415221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.725 #56 NEW cov: 12005 ft: 14580 corp: 43/5258b lim: 320 exec/s: 56 rss: 72Mb L: 76/302 MS: 1 ChangeBit- 00:07:20.725 [2024-05-13 02:48:11.445239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a0) qid:0 cid:4 nsid:a0a0a0a0 cdw10:a0a0a0a0 cdw11:a0a0a0a0 00:07:20.725 [2024-05-13 02:48:11.445264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.725 #57 NEW cov: 12005 ft: 14584 corp: 44/5367b lim: 320 exec/s: 28 rss: 72Mb L: 109/302 MS: 1 CopyPart- 00:07:20.725 #57 DONE cov: 12005 ft: 14584 corp: 44/5367b lim: 320 exec/s: 28 rss: 72Mb 00:07:20.725 Done 57 runs in 2 second(s) 00:07:20.725 [2024-05-13 02:48:11.472696] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:20.985 02:48:11 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_0.conf /var/tmp/suppress_nvmf_fuzz 00:07:20.985 02:48:11 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:20.985 02:48:11 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:20.985 02:48:11 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:07:20.985 02:48:11 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=1 00:07:20.985 02:48:11 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:20.985 02:48:11 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:20.985 02:48:11 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:07:20.985 02:48:11 
llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_1.conf 00:07:20.985 02:48:11 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:20.985 02:48:11 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:20.985 02:48:11 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 1 00:07:20.985 02:48:11 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4401 00:07:20.985 02:48:11 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:07:20.985 02:48:11 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' 00:07:20.985 02:48:11 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4401"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:20.985 02:48:11 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:20.985 02:48:11 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:20.985 02:48:11 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' -c /tmp/fuzz_json_1.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 -Z 1 00:07:20.985 [2024-05-13 02:48:11.635874] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:07:20.985 [2024-05-13 02:48:11.635971] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3497748 ] 00:07:20.985 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.244 [2024-05-13 02:48:11.854299] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:21.244 [2024-05-13 02:48:11.892704] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.244 [2024-05-13 02:48:11.922509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.244 [2024-05-13 02:48:11.974788] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:21.244 [2024-05-13 02:48:11.990747] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:21.244 [2024-05-13 02:48:11.991166] tcp.c: 966:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4401 *** 00:07:21.244 INFO: Running with entropic power schedule (0xFF, 100). 
00:07:21.244 INFO: Seed: 3634166983 00:07:21.245 INFO: Loaded 1 modules (350429 inline 8-bit counters): 350429 [0x279074c, 0x27e6029), 00:07:21.245 INFO: Loaded 1 PC tables (350429 PCs): 350429 [0x27e6030,0x2d3ee00), 00:07:21.245 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:07:21.245 INFO: A corpus is not provided, starting from an empty corpus 00:07:21.245 #2 INITED exec/s: 0 rss: 63Mb 00:07:21.245 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:21.245 This may also happen if the target rejected all inputs we tried so far 00:07:21.504 [2024-05-13 02:48:12.057457] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff2a 00:07:21.504 [2024-05-13 02:48:12.057916] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.504 [2024-05-13 02:48:12.057959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.763 NEW_FUNC[1/686]: 0x4a4560 in fuzz_admin_get_log_page_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:67 00:07:21.763 NEW_FUNC[2/686]: 0x4e0360 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:21.763 #4 NEW cov: 11823 ft: 11838 corp: 2/8b lim: 30 exec/s: 0 rss: 70Mb L: 7/7 MS: 2 InsertByte-InsertRepeatedBytes- 00:07:21.763 [2024-05-13 02:48:12.387710] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262144) > buf size (4096) 00:07:21.763 [2024-05-13 02:48:12.388225] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.763 [2024-05-13 02:48:12.388270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.763 [2024-05-13 02:48:12.388401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.763 [2024-05-13 02:48:12.388421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.763 #6 NEW cov: 12008 ft: 12961 corp: 3/22b lim: 30 exec/s: 0 rss: 70Mb L: 14/14 MS: 2 EraseBytes-InsertRepeatedBytes- 00:07:21.763 [2024-05-13 02:48:12.447786] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262144) > buf size (4096) 00:07:21.764 [2024-05-13 02:48:12.448300] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.764 [2024-05-13 02:48:12.448333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.764 [2024-05-13 02:48:12.448457] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.764 [2024-05-13 02:48:12.448478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.764 #7 NEW cov: 12014 ft: 13164 corp: 4/36b lim: 30 exec/s: 0 rss: 70Mb L: 14/14 
MS: 1 ShuffleBytes- 00:07:21.764 [2024-05-13 02:48:12.497808] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (1048576) > buf size (4096) 00:07:21.764 [2024-05-13 02:48:12.497967] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (172708) > buf size (4096) 00:07:21.764 [2024-05-13 02:48:12.498125] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (172708) > buf size (4096) 00:07:21.764 [2024-05-13 02:48:12.498281] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (172708) > buf size (4096) 00:07:21.764 [2024-05-13 02:48:12.498444] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2a0a 00:07:21.764 [2024-05-13 02:48:12.498797] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.764 [2024-05-13 02:48:12.498832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.764 [2024-05-13 02:48:12.498968] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:a8a800a8 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.764 [2024-05-13 02:48:12.498990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.764 [2024-05-13 02:48:12.499120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:a8a800a8 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.764 [2024-05-13 02:48:12.499142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.764 [2024-05-13 02:48:12.499273] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:a8a800a8 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.764 [2024-05-13 02:48:12.499292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:21.764 [2024-05-13 02:48:12.499423] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:a8a800a8 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.764 [2024-05-13 02:48:12.499442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:21.764 #8 NEW cov: 12099 ft: 14059 corp: 5/66b lim: 30 exec/s: 0 rss: 70Mb L: 30/30 MS: 1 InsertRepeatedBytes- 00:07:21.764 [2024-05-13 02:48:12.537655] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff2a 00:07:21.764 [2024-05-13 02:48:12.538030] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffc983ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.764 [2024-05-13 02:48:12.538064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.764 #9 NEW cov: 12099 ft: 14251 corp: 6/73b lim: 30 exec/s: 0 rss: 70Mb L: 7/30 MS: 1 ChangeByte- 00:07:22.024 [2024-05-13 02:48:12.588524] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (1048576) > buf size (4096) 00:07:22.024 [2024-05-13 02:48:12.588713] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (172708) > buf size (4096) 00:07:22.024 [2024-05-13 
02:48:12.588872] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (172708) > buf size (4096) 00:07:22.024 [2024-05-13 02:48:12.589029] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (172708) > buf size (4096) 00:07:22.024 [2024-05-13 02:48:12.589202] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2a7a 00:07:22.024 [2024-05-13 02:48:12.589572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.024 [2024-05-13 02:48:12.589603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.024 [2024-05-13 02:48:12.589728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:a8a800a8 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.024 [2024-05-13 02:48:12.589748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.024 [2024-05-13 02:48:12.589864] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:a8a800a8 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.024 [2024-05-13 02:48:12.589882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.024 [2024-05-13 02:48:12.590014] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:a8a800a8 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.024 [2024-05-13 02:48:12.590034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:22.024 [2024-05-13 02:48:12.590161] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:a8a800a8 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.024 [2024-05-13 02:48:12.590181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:22.024 #10 NEW cov: 12099 ft: 14340 corp: 7/103b lim: 30 exec/s: 0 rss: 70Mb L: 30/30 MS: 1 ChangeByte- 00:07:22.024 [2024-05-13 02:48:12.648477] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262144) > buf size (4096) 00:07:22.024 [2024-05-13 02:48:12.648659] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (786436) > buf size (4096) 00:07:22.024 [2024-05-13 02:48:12.649030] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.024 [2024-05-13 02:48:12.649061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.024 [2024-05-13 02:48:12.649188] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00008300 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.024 [2024-05-13 02:48:12.649206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.024 #11 NEW cov: 12099 ft: 14418 corp: 8/117b lim: 30 exec/s: 0 rss: 70Mb L: 14/30 MS: 1 ChangeByte- 00:07:22.024 [2024-05-13 02:48:12.688979] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len 
(1048576) > buf size (4096) 00:07:22.024 [2024-05-13 02:48:12.689162] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (172708) > buf size (4096) 00:07:22.024 [2024-05-13 02:48:12.689342] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (172708) > buf size (4096) 00:07:22.024 [2024-05-13 02:48:12.689518] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (172708) > buf size (4096) 00:07:22.024 [2024-05-13 02:48:12.689683] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2a7a 00:07:22.024 [2024-05-13 02:48:12.690018] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.024 [2024-05-13 02:48:12.690049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.024 [2024-05-13 02:48:12.690173] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:a8a800a8 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.024 [2024-05-13 02:48:12.690194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.024 [2024-05-13 02:48:12.690329] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:a8a800a8 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.024 [2024-05-13 02:48:12.690350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.024 [2024-05-13 02:48:12.690470] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:a8a800a8 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.024 [2024-05-13 02:48:12.690488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:22.024 [2024-05-13 02:48:12.690611] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:a8a800a8 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.024 [2024-05-13 02:48:12.690631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:22.024 #12 NEW cov: 12099 ft: 14462 corp: 9/147b lim: 30 exec/s: 0 rss: 70Mb L: 30/30 MS: 1 ChangeBit- 00:07:22.024 [2024-05-13 02:48:12.748881] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000c9ff 00:07:22.024 [2024-05-13 02:48:12.749215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffc983ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.024 [2024-05-13 02:48:12.749247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.024 #13 NEW cov: 12099 ft: 14497 corp: 10/154b lim: 30 exec/s: 0 rss: 70Mb L: 7/30 MS: 1 CopyPart- 00:07:22.024 [2024-05-13 02:48:12.799013] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (261160) > buf size (4096) 00:07:22.024 [2024-05-13 02:48:12.799186] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (786436) > buf size (4096) 00:07:22.024 [2024-05-13 02:48:12.799575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ff090001 cdw11:00000000 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.024 [2024-05-13 02:48:12.799613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.024 [2024-05-13 02:48:12.799739] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00008300 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.024 [2024-05-13 02:48:12.799760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.284 #14 NEW cov: 12099 ft: 14531 corp: 11/168b lim: 30 exec/s: 0 rss: 70Mb L: 14/30 MS: 1 ChangeBinInt- 00:07:22.284 [2024-05-13 02:48:12.848731] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xb3 00:07:22.284 [2024-05-13 02:48:12.848902] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (786436) > buf size (4096) 00:07:22.284 [2024-05-13 02:48:12.849265] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ff090001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.284 [2024-05-13 02:48:12.849295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.284 [2024-05-13 02:48:12.849423] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00008300 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.284 [2024-05-13 02:48:12.849443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.284 #15 NEW cov: 12099 ft: 14610 corp: 12/182b lim: 30 exec/s: 0 rss: 70Mb L: 14/30 MS: 1 ChangeByte- 00:07:22.284 [2024-05-13 02:48:12.909313] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000c9ff 00:07:22.284 [2024-05-13 02:48:12.909678] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:27c983ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.284 [2024-05-13 02:48:12.909709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.284 NEW_FUNC[1/1]: 0x1a07120 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:22.284 #16 NEW cov: 12122 ft: 14737 corp: 13/189b lim: 30 exec/s: 0 rss: 70Mb L: 7/30 MS: 1 ChangeByte- 00:07:22.284 [2024-05-13 02:48:12.959077] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (261160) > buf size (4096) 00:07:22.284 [2024-05-13 02:48:12.959237] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000ffff 00:07:22.284 [2024-05-13 02:48:12.959616] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ff090001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.284 [2024-05-13 02:48:12.959647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.284 [2024-05-13 02:48:12.959763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00008100 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.284 [2024-05-13 02:48:12.959782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.284 #17 NEW cov: 
12122 ft: 14754 corp: 14/203b lim: 30 exec/s: 0 rss: 70Mb L: 14/30 MS: 1 ChangeBinInt- 00:07:22.285 [2024-05-13 02:48:12.999071] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff2a 00:07:22.285 [2024-05-13 02:48:12.999416] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:010083ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.285 [2024-05-13 02:48:12.999447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.285 #18 NEW cov: 12122 ft: 14794 corp: 15/210b lim: 30 exec/s: 0 rss: 70Mb L: 7/30 MS: 1 CMP- DE: "\001\000"- 00:07:22.285 [2024-05-13 02:48:13.049831] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262144) > buf size (4096) 00:07:22.285 [2024-05-13 02:48:13.050337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.285 [2024-05-13 02:48:13.050369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.285 [2024-05-13 02:48:13.050502] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.285 [2024-05-13 02:48:13.050520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.285 #19 NEW cov: 12122 ft: 14802 corp: 16/224b lim: 30 exec/s: 19 rss: 70Mb L: 14/30 MS: 1 PersAutoDict- DE: "\001\000"- 00:07:22.544 [2024-05-13 02:48:13.089816] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (526336) > buf size (4096) 00:07:22.544 [2024-05-13 02:48:13.090003] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (172708) > buf size (4096) 00:07:22.544 [2024-05-13 02:48:13.090351] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:01ff02ff cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.544 [2024-05-13 02:48:13.090385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.544 [2024-05-13 02:48:13.090514] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:a8a800a8 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.544 [2024-05-13 02:48:13.090533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.544 #21 NEW cov: 12122 ft: 14818 corp: 17/240b lim: 30 exec/s: 21 rss: 71Mb L: 16/30 MS: 2 EraseBytes-CrossOver- 00:07:22.544 [2024-05-13 02:48:13.150008] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff2a 00:07:22.544 [2024-05-13 02:48:13.150405] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:01008300 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.544 [2024-05-13 02:48:13.150436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.544 #22 NEW cov: 12122 ft: 14839 corp: 18/247b lim: 30 exec/s: 22 rss: 71Mb L: 7/30 MS: 1 CrossOver- 00:07:22.544 [2024-05-13 02:48:13.200248] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len 
(258048) > buf size (4096) 00:07:22.544 [2024-05-13 02:48:13.200794] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:fbff0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.544 [2024-05-13 02:48:13.200829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.544 [2024-05-13 02:48:13.200955] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.544 [2024-05-13 02:48:13.200973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.544 #23 NEW cov: 12122 ft: 14882 corp: 19/261b lim: 30 exec/s: 23 rss: 71Mb L: 14/30 MS: 1 ChangeBinInt- 00:07:22.544 [2024-05-13 02:48:13.250353] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (261160) > buf size (4096) 00:07:22.544 [2024-05-13 02:48:13.250532] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000ffff 00:07:22.544 [2024-05-13 02:48:13.250884] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ff090001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.544 [2024-05-13 02:48:13.250915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.544 [2024-05-13 02:48:13.251043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00008100 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.544 [2024-05-13 02:48:13.251067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.544 #24 NEW cov: 12122 ft: 14913 corp: 20/275b lim: 30 exec/s: 24 rss: 71Mb L: 14/30 MS: 1 CrossOver- 00:07:22.544 [2024-05-13 02:48:13.310616] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000ffff 00:07:22.544 [2024-05-13 02:48:13.310784] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000c9ff 00:07:22.544 [2024-05-13 02:48:13.311144] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffc981ff cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.544 [2024-05-13 02:48:13.311177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.544 [2024-05-13 02:48:13.311309] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:c9ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.544 [2024-05-13 02:48:13.311327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.544 #25 NEW cov: 12122 ft: 14970 corp: 21/288b lim: 30 exec/s: 25 rss: 71Mb L: 13/30 MS: 1 CopyPart- 00:07:22.805 [2024-05-13 02:48:13.360725] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000ffff 00:07:22.805 [2024-05-13 02:48:13.361071] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:01008127 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.805 [2024-05-13 02:48:13.361103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) 
qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.805 #26 NEW cov: 12122 ft: 14986 corp: 22/297b lim: 30 exec/s: 26 rss: 71Mb L: 9/30 MS: 1 PersAutoDict- DE: "\001\000"- 00:07:22.805 [2024-05-13 02:48:13.420821] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff2a 00:07:22.805 [2024-05-13 02:48:13.421159] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:01008300 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.805 [2024-05-13 02:48:13.421191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.805 #27 NEW cov: 12122 ft: 15016 corp: 23/304b lim: 30 exec/s: 27 rss: 71Mb L: 7/30 MS: 1 ShuffleBytes- 00:07:22.805 [2024-05-13 02:48:13.481255] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262144) > buf size (4096) 00:07:22.805 [2024-05-13 02:48:13.481579] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:22.805 [2024-05-13 02:48:13.481743] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (260772) > buf size (4096) 00:07:22.805 [2024-05-13 02:48:13.482115] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.805 [2024-05-13 02:48:13.482148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.805 [2024-05-13 02:48:13.482281] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.805 [2024-05-13 02:48:13.482300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.805 [2024-05-13 02:48:13.482436] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ff0a83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.805 [2024-05-13 02:48:13.482455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.805 [2024-05-13 02:48:13.482574] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:fea800a8 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.805 [2024-05-13 02:48:13.482597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:22.805 #28 NEW cov: 12122 ft: 15050 corp: 24/331b lim: 30 exec/s: 28 rss: 71Mb L: 27/30 MS: 1 CrossOver- 00:07:22.805 [2024-05-13 02:48:13.541188] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (261160) > buf size (4096) 00:07:22.805 [2024-05-13 02:48:13.541555] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ff090001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.805 [2024-05-13 02:48:13.541587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.805 #29 NEW cov: 12122 ft: 15086 corp: 25/338b lim: 30 exec/s: 29 rss: 71Mb L: 7/30 MS: 1 EraseBytes- 00:07:22.805 [2024-05-13 02:48:13.601718] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (261124) > buf size (4096) 00:07:22.805 [2024-05-13 
02:48:13.602186] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:22.805 [2024-05-13 02:48:13.602541] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ff000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.805 [2024-05-13 02:48:13.602572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.805 [2024-05-13 02:48:13.602702] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.805 [2024-05-13 02:48:13.602721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.805 [2024-05-13 02:48:13.602844] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.805 [2024-05-13 02:48:13.602864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.805 [2024-05-13 02:48:13.602984] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:000083ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.805 [2024-05-13 02:48:13.603004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:23.065 #30 NEW cov: 12122 ft: 15175 corp: 26/364b lim: 30 exec/s: 30 rss: 71Mb L: 26/30 MS: 1 InsertRepeatedBytes- 00:07:23.065 [2024-05-13 02:48:13.651759] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xb3 00:07:23.065 [2024-05-13 02:48:13.651946] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (24672) > len (4) 00:07:23.065 [2024-05-13 02:48:13.652106] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (98692) > buf size (4096) 00:07:23.065 [2024-05-13 02:48:13.652265] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (885124) > buf size (4096) 00:07:23.065 [2024-05-13 02:48:13.652622] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ff090001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.065 [2024-05-13 02:48:13.652653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.065 [2024-05-13 02:48:13.652794] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.065 [2024-05-13 02:48:13.652817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.065 [2024-05-13 02:48:13.652948] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:60600060 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.065 [2024-05-13 02:48:13.652968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.065 [2024-05-13 02:48:13.653107] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:60608360 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.065 
[2024-05-13 02:48:13.653131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:23.065 #31 NEW cov: 12128 ft: 15222 corp: 27/390b lim: 30 exec/s: 31 rss: 71Mb L: 26/30 MS: 1 InsertRepeatedBytes- 00:07:23.065 [2024-05-13 02:48:13.711744] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (1044480) > buf size (4096) 00:07:23.065 [2024-05-13 02:48:13.712251] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:fbff83fb cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.065 [2024-05-13 02:48:13.712282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.065 [2024-05-13 02:48:13.712405] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.065 [2024-05-13 02:48:13.712424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.065 #32 NEW cov: 12128 ft: 15245 corp: 28/407b lim: 30 exec/s: 32 rss: 72Mb L: 17/30 MS: 1 CopyPart- 00:07:23.065 [2024-05-13 02:48:13.771947] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (261160) > buf size (4096) 00:07:23.065 [2024-05-13 02:48:13.772113] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000f9ff 00:07:23.065 [2024-05-13 02:48:13.772485] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ff090001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.065 [2024-05-13 02:48:13.772516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.065 [2024-05-13 02:48:13.772637] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00008300 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.065 [2024-05-13 02:48:13.772655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.065 #33 NEW cov: 12128 ft: 15275 corp: 29/422b lim: 30 exec/s: 33 rss: 72Mb L: 15/30 MS: 1 InsertByte- 00:07:23.065 [2024-05-13 02:48:13.812020] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (787460) > buf size (4096) 00:07:23.065 [2024-05-13 02:48:13.812377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:01008300 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.065 [2024-05-13 02:48:13.812414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.065 #34 NEW cov: 12128 ft: 15280 corp: 30/433b lim: 30 exec/s: 34 rss: 72Mb L: 11/30 MS: 1 CrossOver- 00:07:23.065 [2024-05-13 02:48:13.851664] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff01 00:07:23.065 [2024-05-13 02:48:13.852036] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffc983ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.065 [2024-05-13 02:48:13.852067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.325 #35 NEW cov: 12128 ft: 15303 corp: 31/442b lim: 30 exec/s: 
35 rss: 72Mb L: 9/30 MS: 1 PersAutoDict- DE: "\001\000"- 00:07:23.325 [2024-05-13 02:48:13.902423] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (526336) > buf size (4096) 00:07:23.325 [2024-05-13 02:48:13.902595] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (172708) > buf size (4096) 00:07:23.325 [2024-05-13 02:48:13.902953] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:01ff02ff cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.325 [2024-05-13 02:48:13.902983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.325 [2024-05-13 02:48:13.903108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:a8a800a8 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.325 [2024-05-13 02:48:13.903131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.325 #36 NEW cov: 12128 ft: 15307 corp: 32/458b lim: 30 exec/s: 36 rss: 72Mb L: 16/30 MS: 1 CMP- DE: "\010\000\000\000"- 00:07:23.325 [2024-05-13 02:48:13.962518] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xff2a 00:07:23.325 [2024-05-13 02:48:13.962857] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:01000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.325 [2024-05-13 02:48:13.962889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.325 #37 NEW cov: 12128 ft: 15311 corp: 33/465b lim: 30 exec/s: 37 rss: 72Mb L: 7/30 MS: 1 PersAutoDict- DE: "\001\000"- 00:07:23.325 [2024-05-13 02:48:14.002540] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xff2a 00:07:23.325 [2024-05-13 02:48:14.002702] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:07:23.325 [2024-05-13 02:48:14.003059] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:01000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.325 [2024-05-13 02:48:14.003091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.325 [2024-05-13 02:48:14.003226] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.325 [2024-05-13 02:48:14.003241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.325 #38 NEW cov: 12128 ft: 15317 corp: 34/479b lim: 30 exec/s: 38 rss: 72Mb L: 14/30 MS: 1 InsertRepeatedBytes- 00:07:23.325 [2024-05-13 02:48:14.062914] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff2a 00:07:23.325 [2024-05-13 02:48:14.063085] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:23.325 [2024-05-13 02:48:14.063452] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:010083ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.325 [2024-05-13 02:48:14.063485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:07:23.325 [2024-05-13 02:48:14.063616] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.325 [2024-05-13 02:48:14.063637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.325 #39 NEW cov: 12128 ft: 15339 corp: 35/494b lim: 30 exec/s: 19 rss: 72Mb L: 15/30 MS: 1 InsertRepeatedBytes- 00:07:23.325 #39 DONE cov: 12128 ft: 15339 corp: 35/494b lim: 30 exec/s: 19 rss: 72Mb 00:07:23.325 ###### Recommended dictionary. ###### 00:07:23.325 "\001\000" # Uses: 4 00:07:23.325 "\010\000\000\000" # Uses: 0 00:07:23.325 ###### End of recommended dictionary. ###### 00:07:23.325 Done 39 runs in 2 second(s) 00:07:23.325 [2024-05-13 02:48:14.083933] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:23.585 02:48:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_1.conf /var/tmp/suppress_nvmf_fuzz 00:07:23.585 02:48:14 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:23.585 02:48:14 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:23.585 02:48:14 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:07:23.585 02:48:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=2 00:07:23.585 02:48:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:23.585 02:48:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:23.585 02:48:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:07:23.585 02:48:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_2.conf 00:07:23.585 02:48:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:23.585 02:48:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:23.585 02:48:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 2 00:07:23.585 02:48:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4402 00:07:23.585 02:48:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:07:23.585 02:48:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' 00:07:23.585 02:48:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4402"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:23.585 02:48:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:23.585 02:48:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:23.585 02:48:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' -c /tmp/fuzz_json_2.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 -Z 2 00:07:23.585 [2024-05-13 02:48:14.244938] Starting SPDK 
v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:07:23.585 [2024-05-13 02:48:14.245026] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3498279 ] 00:07:23.585 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.844 [2024-05-13 02:48:14.456892] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:23.844 [2024-05-13 02:48:14.494935] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.844 [2024-05-13 02:48:14.524487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.844 [2024-05-13 02:48:14.576634] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:23.844 [2024-05-13 02:48:14.592591] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:23.844 [2024-05-13 02:48:14.592986] tcp.c: 966:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4402 *** 00:07:23.844 INFO: Running with entropic power schedule (0xFF, 100). 00:07:23.844 INFO: Seed: 1942217405 00:07:23.844 INFO: Loaded 1 modules (350429 inline 8-bit counters): 350429 [0x279074c, 0x27e6029), 00:07:23.844 INFO: Loaded 1 PC tables (350429 PCs): 350429 [0x27e6030,0x2d3ee00), 00:07:23.844 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:07:23.844 INFO: A corpus is not provided, starting from an empty corpus 00:07:23.844 #2 INITED exec/s: 0 rss: 63Mb 00:07:23.844 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:23.844 This may also happen if the target rejected all inputs we tried so far 00:07:24.104 [2024-05-13 02:48:14.648500] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:9f9f009f cdw11:9f009f9f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.104 [2024-05-13 02:48:14.648528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.104 [2024-05-13 02:48:14.648591] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:9f9f009f cdw11:9f009f9f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.104 [2024-05-13 02:48:14.648605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.362 NEW_FUNC[1/685]: 0x4a7010 in fuzz_admin_identify_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:95 00:07:24.362 NEW_FUNC[2/685]: 0x4e0360 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:24.362 #6 NEW cov: 11794 ft: 11792 corp: 2/19b lim: 35 exec/s: 0 rss: 70Mb L: 18/18 MS: 4 InsertByte-ChangeBinInt-ChangeBinInt-InsertRepeatedBytes- 00:07:24.362 [2024-05-13 02:48:14.969148] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:9f9f009f cdw11:9f009f9f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.362 [2024-05-13 02:48:14.969190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.363 [2024-05-13 02:48:14.969253] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:60600061 cdw11:60006060 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.363 [2024-05-13 02:48:14.969272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.363 #7 NEW cov: 11924 ft: 12513 corp: 3/37b lim: 35 exec/s: 0 rss: 70Mb L: 18/18 MS: 1 ChangeBinInt- 00:07:24.363 [2024-05-13 02:48:15.019177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:9f9f009f cdw11:9f009f9f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.363 [2024-05-13 02:48:15.019204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.363 [2024-05-13 02:48:15.019258] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:60600061 cdw11:30006060 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.363 [2024-05-13 02:48:15.019271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.363 #8 NEW cov: 11930 ft: 12733 corp: 4/56b lim: 35 exec/s: 0 rss: 70Mb L: 19/19 MS: 1 InsertByte- 00:07:24.363 [2024-05-13 02:48:15.059632] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.363 [2024-05-13 02:48:15.059657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.363 [2024-05-13 02:48:15.059712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:07:24.363 [2024-05-13 02:48:15.059725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.363 [2024-05-13 02:48:15.059778] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.363 [2024-05-13 02:48:15.059792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.363 [2024-05-13 02:48:15.059846] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.363 [2024-05-13 02:48:15.059859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:24.363 [2024-05-13 02:48:15.059910] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.363 [2024-05-13 02:48:15.059923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:24.363 #9 NEW cov: 12015 ft: 13595 corp: 5/91b lim: 35 exec/s: 0 rss: 70Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:07:24.363 [2024-05-13 02:48:15.099836] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.363 [2024-05-13 02:48:15.099861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.363 [2024-05-13 02:48:15.099918] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.363 [2024-05-13 02:48:15.099932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.363 [2024-05-13 02:48:15.099984] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.363 [2024-05-13 02:48:15.099997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.363 [2024-05-13 02:48:15.100048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.363 [2024-05-13 02:48:15.100060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:24.363 [2024-05-13 02:48:15.100111] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.363 [2024-05-13 02:48:15.100124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:24.363 #10 NEW cov: 12015 ft: 13663 corp: 6/126b lim: 35 exec/s: 0 rss: 70Mb L: 35/35 MS: 1 CopyPart- 00:07:24.363 [2024-05-13 02:48:15.149526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:9f9f009f cdw11:9f009f9f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.363 
[2024-05-13 02:48:15.149552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.363 [2024-05-13 02:48:15.149606] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:60600061 cdw11:30006060 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.363 [2024-05-13 02:48:15.149620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.622 #11 NEW cov: 12015 ft: 13713 corp: 7/145b lim: 35 exec/s: 0 rss: 70Mb L: 19/35 MS: 1 ChangeBit- 00:07:24.622 [2024-05-13 02:48:15.189684] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:9f9f009f cdw11:9f009f9f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.622 [2024-05-13 02:48:15.189709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.622 [2024-05-13 02:48:15.189764] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:60600061 cdw11:300060e0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.622 [2024-05-13 02:48:15.189777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.622 #12 NEW cov: 12015 ft: 13800 corp: 8/164b lim: 35 exec/s: 0 rss: 70Mb L: 19/35 MS: 1 ChangeBit- 00:07:24.622 [2024-05-13 02:48:15.230180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.622 [2024-05-13 02:48:15.230205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.622 [2024-05-13 02:48:15.230258] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.622 [2024-05-13 02:48:15.230272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.622 [2024-05-13 02:48:15.230324] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.622 [2024-05-13 02:48:15.230338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.622 [2024-05-13 02:48:15.230393] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.622 [2024-05-13 02:48:15.230407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:24.622 [2024-05-13 02:48:15.230457] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.622 [2024-05-13 02:48:15.230470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:24.622 #13 NEW cov: 12015 ft: 13862 corp: 9/199b lim: 35 exec/s: 0 rss: 71Mb L: 35/35 MS: 1 ShuffleBytes- 00:07:24.622 [2024-05-13 02:48:15.280276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 
cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.622 [2024-05-13 02:48:15.280301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.622 [2024-05-13 02:48:15.280356] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.622 [2024-05-13 02:48:15.280370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.622 [2024-05-13 02:48:15.280426] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:f700ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.622 [2024-05-13 02:48:15.280440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.622 [2024-05-13 02:48:15.280492] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.622 [2024-05-13 02:48:15.280505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:24.622 [2024-05-13 02:48:15.280557] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.622 [2024-05-13 02:48:15.280569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:24.622 #14 NEW cov: 12015 ft: 13896 corp: 10/234b lim: 35 exec/s: 0 rss: 71Mb L: 35/35 MS: 1 ChangeBit- 00:07:24.622 [2024-05-13 02:48:15.320046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:9f9f009f cdw11:9f009f9f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.622 [2024-05-13 02:48:15.320071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.622 [2024-05-13 02:48:15.320125] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:d5600061 cdw11:e0006060 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.622 [2024-05-13 02:48:15.320139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.622 #15 NEW cov: 12015 ft: 13949 corp: 11/254b lim: 35 exec/s: 0 rss: 71Mb L: 20/35 MS: 1 InsertByte- 00:07:24.622 [2024-05-13 02:48:15.360152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:9f9f009f cdw11:9f009f9f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.622 [2024-05-13 02:48:15.360176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.622 [2024-05-13 02:48:15.360247] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:f9600061 cdw11:30006060 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.622 [2024-05-13 02:48:15.360262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.622 #16 NEW cov: 12015 ft: 13972 corp: 12/273b lim: 35 exec/s: 0 rss: 71Mb L: 19/35 MS: 1 ChangeByte- 00:07:24.622 [2024-05-13 02:48:15.400397] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:9f9f009f cdw11:9f009f9f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.622 [2024-05-13 02:48:15.400421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.622 [2024-05-13 02:48:15.400492] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:60600061 cdw11:30006060 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.622 [2024-05-13 02:48:15.400507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.622 [2024-05-13 02:48:15.400572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:a8a800a8 cdw11:9d00605d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.622 [2024-05-13 02:48:15.400586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.882 #17 NEW cov: 12015 ft: 14175 corp: 13/295b lim: 35 exec/s: 0 rss: 71Mb L: 22/35 MS: 1 InsertRepeatedBytes- 00:07:24.882 [2024-05-13 02:48:15.450816] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.882 [2024-05-13 02:48:15.450840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.882 [2024-05-13 02:48:15.450896] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.882 [2024-05-13 02:48:15.450910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.882 [2024-05-13 02:48:15.450962] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.882 [2024-05-13 02:48:15.450975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.882 [2024-05-13 02:48:15.451029] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.882 [2024-05-13 02:48:15.451042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:24.882 [2024-05-13 02:48:15.451095] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:ffff00ff cdw11:ff00ff1c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.882 [2024-05-13 02:48:15.451108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:24.882 #18 NEW cov: 12015 ft: 14201 corp: 14/330b lim: 35 exec/s: 0 rss: 71Mb L: 35/35 MS: 1 ChangeByte- 00:07:24.882 [2024-05-13 02:48:15.490861] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.882 [2024-05-13 02:48:15.490886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.882 [2024-05-13 02:48:15.490942] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.882 [2024-05-13 02:48:15.490955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.882 [2024-05-13 02:48:15.491009] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.882 [2024-05-13 02:48:15.491022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.882 [2024-05-13 02:48:15.491075] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.882 [2024-05-13 02:48:15.491091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:24.882 [2024-05-13 02:48:15.491144] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.882 [2024-05-13 02:48:15.491158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:24.882 #19 NEW cov: 12015 ft: 14209 corp: 15/365b lim: 35 exec/s: 0 rss: 71Mb L: 35/35 MS: 1 ChangeBit- 00:07:24.882 [2024-05-13 02:48:15.530991] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:feff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.882 [2024-05-13 02:48:15.531015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.882 [2024-05-13 02:48:15.531070] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.882 [2024-05-13 02:48:15.531083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.882 [2024-05-13 02:48:15.531136] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.882 [2024-05-13 02:48:15.531149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.882 [2024-05-13 02:48:15.531202] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.882 [2024-05-13 02:48:15.531214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:24.882 [2024-05-13 02:48:15.531267] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.882 [2024-05-13 02:48:15.531280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:24.882 NEW_FUNC[1/1]: 0x1a07120 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:24.882 #20 NEW cov: 12038 ft: 14306 corp: 16/400b lim: 35 
exec/s: 0 rss: 71Mb L: 35/35 MS: 1 ChangeBit- 00:07:24.882 [2024-05-13 02:48:15.581155] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:0a00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.882 [2024-05-13 02:48:15.581180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.882 [2024-05-13 02:48:15.581233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.882 [2024-05-13 02:48:15.581247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.882 [2024-05-13 02:48:15.581300] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.882 [2024-05-13 02:48:15.581313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.882 [2024-05-13 02:48:15.581365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffef SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.882 [2024-05-13 02:48:15.581378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:24.882 [2024-05-13 02:48:15.581436] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.883 [2024-05-13 02:48:15.581452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:24.883 #21 NEW cov: 12038 ft: 14327 corp: 17/435b lim: 35 exec/s: 0 rss: 71Mb L: 35/35 MS: 1 CopyPart- 00:07:24.883 [2024-05-13 02:48:15.621256] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:feff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.883 [2024-05-13 02:48:15.621280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.883 [2024-05-13 02:48:15.621336] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.883 [2024-05-13 02:48:15.621350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.883 [2024-05-13 02:48:15.621405] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.883 [2024-05-13 02:48:15.621419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.883 [2024-05-13 02:48:15.621471] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.883 [2024-05-13 02:48:15.621485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:24.883 [2024-05-13 02:48:15.621537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:ffff00ff cdw11:0a00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.883 [2024-05-13 02:48:15.621550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:24.883 #22 NEW cov: 12038 ft: 14405 corp: 18/470b lim: 35 exec/s: 22 rss: 71Mb L: 35/35 MS: 1 CrossOver- 00:07:24.883 [2024-05-13 02:48:15.671161] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:feff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.883 [2024-05-13 02:48:15.671186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.883 [2024-05-13 02:48:15.671244] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ef00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.883 [2024-05-13 02:48:15.671257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.883 [2024-05-13 02:48:15.671311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.883 [2024-05-13 02:48:15.671324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.142 #23 NEW cov: 12038 ft: 14418 corp: 19/497b lim: 35 exec/s: 23 rss: 71Mb L: 27/35 MS: 1 EraseBytes- 00:07:25.142 [2024-05-13 02:48:15.711537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:40ff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.142 [2024-05-13 02:48:15.711562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.142 [2024-05-13 02:48:15.711620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.142 [2024-05-13 02:48:15.711633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.142 [2024-05-13 02:48:15.711687] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.142 [2024-05-13 02:48:15.711704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.142 [2024-05-13 02:48:15.711756] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.142 [2024-05-13 02:48:15.711769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:25.142 [2024-05-13 02:48:15.711824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:ffff00ff cdw11:0a00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.142 [2024-05-13 02:48:15.711837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:25.142 #24 NEW cov: 12038 ft: 14458 corp: 20/532b lim: 35 exec/s: 24 rss: 71Mb L: 35/35 MS: 1 ChangeByte- 
00:07:25.142 [2024-05-13 02:48:15.761393] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:9f9f009f cdw11:9f009f9f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.142 [2024-05-13 02:48:15.761418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.142 [2024-05-13 02:48:15.761472] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:6160009f cdw11:60006060 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.142 [2024-05-13 02:48:15.761486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.142 [2024-05-13 02:48:15.761537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:a8a80060 cdw11:5d00a860 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.142 [2024-05-13 02:48:15.761550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.142 #25 NEW cov: 12038 ft: 14469 corp: 21/555b lim: 35 exec/s: 25 rss: 72Mb L: 23/35 MS: 1 InsertByte- 00:07:25.142 [2024-05-13 02:48:15.801392] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:feff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.142 [2024-05-13 02:48:15.801416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.142 [2024-05-13 02:48:15.801469] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.142 [2024-05-13 02:48:15.801482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.142 #26 NEW cov: 12038 ft: 14523 corp: 22/569b lim: 35 exec/s: 26 rss: 72Mb L: 14/35 MS: 1 EraseBytes- 00:07:25.142 [2024-05-13 02:48:15.841646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:179f009f cdw11:9f009f9f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.142 [2024-05-13 02:48:15.841672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.142 [2024-05-13 02:48:15.841724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:6160009f cdw11:60006060 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.142 [2024-05-13 02:48:15.841738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.142 [2024-05-13 02:48:15.841792] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:a8a80060 cdw11:5d00a860 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.142 [2024-05-13 02:48:15.841807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.142 #27 NEW cov: 12038 ft: 14586 corp: 23/592b lim: 35 exec/s: 27 rss: 72Mb L: 23/35 MS: 1 ChangeBinInt- 00:07:25.142 [2024-05-13 02:48:15.881755] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:9f9f009f cdw11:9f009f9f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.143 [2024-05-13 02:48:15.881785] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.143 [2024-05-13 02:48:15.881855] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:6060009f cdw11:60006061 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.143 [2024-05-13 02:48:15.881869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.143 [2024-05-13 02:48:15.881925] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:a8a80060 cdw11:5d00a860 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.143 [2024-05-13 02:48:15.881939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.143 #28 NEW cov: 12038 ft: 14602 corp: 24/615b lim: 35 exec/s: 28 rss: 72Mb L: 23/35 MS: 1 ShuffleBytes- 00:07:25.143 [2024-05-13 02:48:15.922133] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:feff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.143 [2024-05-13 02:48:15.922158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.143 [2024-05-13 02:48:15.922212] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.143 [2024-05-13 02:48:15.922226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.143 [2024-05-13 02:48:15.922276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.143 [2024-05-13 02:48:15.922288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.143 [2024-05-13 02:48:15.922339] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.143 [2024-05-13 02:48:15.922352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:25.143 [2024-05-13 02:48:15.922407] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:ffff00ff cdw11:ff003bff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.143 [2024-05-13 02:48:15.922420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:25.143 #29 NEW cov: 12038 ft: 14643 corp: 25/650b lim: 35 exec/s: 29 rss: 72Mb L: 35/35 MS: 1 ChangeByte- 00:07:25.402 [2024-05-13 02:48:15.962234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.402 [2024-05-13 02:48:15.962259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.402 [2024-05-13 02:48:15.962311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.402 [2024-05-13 02:48:15.962324] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.402 [2024-05-13 02:48:15.962375] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.402 [2024-05-13 02:48:15.962393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.402 [2024-05-13 02:48:15.962445] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.402 [2024-05-13 02:48:15.962460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:25.402 [2024-05-13 02:48:15.962514] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.402 [2024-05-13 02:48:15.962527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:25.402 #30 NEW cov: 12038 ft: 14648 corp: 26/685b lim: 35 exec/s: 30 rss: 72Mb L: 35/35 MS: 1 CopyPart- 00:07:25.402 [2024-05-13 02:48:16.002228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.402 [2024-05-13 02:48:16.002253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.402 [2024-05-13 02:48:16.002309] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.402 [2024-05-13 02:48:16.002322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.402 [2024-05-13 02:48:16.002375] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.402 [2024-05-13 02:48:16.002393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.402 [2024-05-13 02:48:16.002448] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.402 [2024-05-13 02:48:16.002461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:25.402 #31 NEW cov: 12038 ft: 14678 corp: 27/718b lim: 35 exec/s: 31 rss: 72Mb L: 33/35 MS: 1 EraseBytes- 00:07:25.402 [2024-05-13 02:48:16.042114] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:9f9f009f cdw11:61009f9f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.402 [2024-05-13 02:48:16.042138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.402 [2024-05-13 02:48:16.042191] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:60600060 cdw11:60003060 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.402 [2024-05-13 02:48:16.042205] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.402 #32 NEW cov: 12038 ft: 14694 corp: 28/735b lim: 35 exec/s: 32 rss: 72Mb L: 17/35 MS: 1 EraseBytes- 00:07:25.402 [2024-05-13 02:48:16.082608] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:feff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.402 [2024-05-13 02:48:16.082632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.402 [2024-05-13 02:48:16.082683] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.402 [2024-05-13 02:48:16.082697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.402 [2024-05-13 02:48:16.082749] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.402 [2024-05-13 02:48:16.082763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.402 [2024-05-13 02:48:16.082812] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.402 [2024-05-13 02:48:16.082824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:25.402 [2024-05-13 02:48:16.082881] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.402 [2024-05-13 02:48:16.082895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:25.403 #33 NEW cov: 12038 ft: 14704 corp: 29/770b lim: 35 exec/s: 33 rss: 72Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:07:25.403 [2024-05-13 02:48:16.132751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.403 [2024-05-13 02:48:16.132776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.403 [2024-05-13 02:48:16.132827] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.403 [2024-05-13 02:48:16.132840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.403 [2024-05-13 02:48:16.132889] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.403 [2024-05-13 02:48:16.132903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.403 [2024-05-13 02:48:16.132954] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff0040 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.403 [2024-05-13 02:48:16.132967] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:25.403 [2024-05-13 02:48:16.133017] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.403 [2024-05-13 02:48:16.133030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:25.403 #39 NEW cov: 12038 ft: 14721 corp: 30/805b lim: 35 exec/s: 39 rss: 72Mb L: 35/35 MS: 1 ChangeByte- 00:07:25.403 [2024-05-13 02:48:16.172848] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:feff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.403 [2024-05-13 02:48:16.172872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.403 [2024-05-13 02:48:16.172924] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.403 [2024-05-13 02:48:16.172938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.403 [2024-05-13 02:48:16.172988] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.403 [2024-05-13 02:48:16.173002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.403 [2024-05-13 02:48:16.173052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.403 [2024-05-13 02:48:16.173065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:25.403 [2024-05-13 02:48:16.173117] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.403 [2024-05-13 02:48:16.173130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:25.403 #40 NEW cov: 12038 ft: 14722 corp: 31/840b lim: 35 exec/s: 40 rss: 72Mb L: 35/35 MS: 1 ShuffleBytes- 00:07:25.662 [2024-05-13 02:48:16.212942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:feff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.662 [2024-05-13 02:48:16.212966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.662 [2024-05-13 02:48:16.213018] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.662 [2024-05-13 02:48:16.213031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.662 [2024-05-13 02:48:16.213081] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.662 [2024-05-13 02:48:16.213094] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.662 [2024-05-13 02:48:16.213143] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.662 [2024-05-13 02:48:16.213156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:25.662 [2024-05-13 02:48:16.213206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.662 [2024-05-13 02:48:16.213219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:25.662 #41 NEW cov: 12038 ft: 14767 corp: 32/875b lim: 35 exec/s: 41 rss: 72Mb L: 35/35 MS: 1 CrossOver- 00:07:25.662 [2024-05-13 02:48:16.263073] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:feff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.662 [2024-05-13 02:48:16.263098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.662 [2024-05-13 02:48:16.263152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.662 [2024-05-13 02:48:16.263166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.662 [2024-05-13 02:48:16.263218] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.662 [2024-05-13 02:48:16.263232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.662 [2024-05-13 02:48:16.263284] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.662 [2024-05-13 02:48:16.263297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:25.662 [2024-05-13 02:48:16.263349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.663 [2024-05-13 02:48:16.263362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:25.663 #42 NEW cov: 12038 ft: 14781 corp: 33/910b lim: 35 exec/s: 42 rss: 72Mb L: 35/35 MS: 1 ShuffleBytes- 00:07:25.663 [2024-05-13 02:48:16.303024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:9f9f009f cdw11:9f009f9f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.663 [2024-05-13 02:48:16.303050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.663 [2024-05-13 02:48:16.303107] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:9f9f009f cdw11:9f009f25 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.663 [2024-05-13 02:48:16.303124] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.663 [2024-05-13 02:48:16.303178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:609f0060 cdw11:6000259f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.663 [2024-05-13 02:48:16.303192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.663 [2024-05-13 02:48:16.303245] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:61600060 cdw11:a8003060 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.663 [2024-05-13 02:48:16.303259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:25.663 #43 NEW cov: 12038 ft: 14784 corp: 34/944b lim: 35 exec/s: 43 rss: 73Mb L: 34/35 MS: 1 CopyPart- 00:07:25.663 [2024-05-13 02:48:16.353343] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:0a00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.663 [2024-05-13 02:48:16.353368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.663 [2024-05-13 02:48:16.353427] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.663 [2024-05-13 02:48:16.353441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.663 [2024-05-13 02:48:16.353494] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.663 [2024-05-13 02:48:16.353507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.663 [2024-05-13 02:48:16.353559] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffef SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.663 [2024-05-13 02:48:16.353572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:25.663 [2024-05-13 02:48:16.353623] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.663 [2024-05-13 02:48:16.353636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:25.663 #44 NEW cov: 12038 ft: 14795 corp: 35/979b lim: 35 exec/s: 44 rss: 73Mb L: 35/35 MS: 1 CopyPart- 00:07:25.663 [2024-05-13 02:48:16.403491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:feff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.663 [2024-05-13 02:48:16.403516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.663 [2024-05-13 02:48:16.403570] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.663 [2024-05-13 02:48:16.403584] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.663 [2024-05-13 02:48:16.403635] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.663 [2024-05-13 02:48:16.403649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.663 [2024-05-13 02:48:16.403698] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ef cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.663 [2024-05-13 02:48:16.403714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:25.663 [2024-05-13 02:48:16.403767] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:ffff00ff cdw11:0a00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.663 [2024-05-13 02:48:16.403780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:25.663 #45 NEW cov: 12038 ft: 14816 corp: 36/1014b lim: 35 exec/s: 45 rss: 73Mb L: 35/35 MS: 1 ChangeBit- 00:07:25.663 [2024-05-13 02:48:16.443601] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:feff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.663 [2024-05-13 02:48:16.443626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.663 [2024-05-13 02:48:16.443679] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ff8900ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.663 [2024-05-13 02:48:16.443692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.663 [2024-05-13 02:48:16.443744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.663 [2024-05-13 02:48:16.443757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.663 [2024-05-13 02:48:16.443809] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.663 [2024-05-13 02:48:16.443822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:25.663 [2024-05-13 02:48:16.443875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.663 [2024-05-13 02:48:16.443889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:25.922 [2024-05-13 02:48:16.493777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:feff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.922 [2024-05-13 02:48:16.493802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.922 [2024-05-13 02:48:16.493871] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ff8900ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.922 [2024-05-13 02:48:16.493885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.922 [2024-05-13 02:48:16.493937] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.922 [2024-05-13 02:48:16.493951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.923 [2024-05-13 02:48:16.494002] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.923 [2024-05-13 02:48:16.494016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:25.923 [2024-05-13 02:48:16.494069] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.923 [2024-05-13 02:48:16.494083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:25.923 #47 NEW cov: 12038 ft: 14825 corp: 37/1049b lim: 35 exec/s: 47 rss: 73Mb L: 35/35 MS: 2 ChangeByte-ShuffleBytes- 00:07:25.923 [2024-05-13 02:48:16.533583] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:feff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.923 [2024-05-13 02:48:16.533609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.923 [2024-05-13 02:48:16.533666] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ef00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.923 [2024-05-13 02:48:16.533680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.923 [2024-05-13 02:48:16.533733] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.923 [2024-05-13 02:48:16.533746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.923 #48 NEW cov: 12038 ft: 14858 corp: 38/1076b lim: 35 exec/s: 48 rss: 73Mb L: 27/35 MS: 1 CopyPart- 00:07:25.923 [2024-05-13 02:48:16.573658] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:25.923 [2024-05-13 02:48:16.573989] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.923 [2024-05-13 02:48:16.574030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.923 [2024-05-13 02:48:16.574085] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.923 [2024-05-13 02:48:16.574099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.923 [2024-05-13 02:48:16.574152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:000000ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.923 [2024-05-13 02:48:16.574166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.923 [2024-05-13 02:48:16.574218] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:23ff0000 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.923 [2024-05-13 02:48:16.574233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:25.923 [2024-05-13 02:48:16.574284] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:ffff00ff cdw11:ff00ff1c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.923 [2024-05-13 02:48:16.574298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:25.923 #49 NEW cov: 12047 ft: 14885 corp: 39/1111b lim: 35 exec/s: 49 rss: 73Mb L: 35/35 MS: 1 ChangeBinInt- 00:07:25.923 [2024-05-13 02:48:16.624088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:0a00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.923 [2024-05-13 02:48:16.624113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.923 [2024-05-13 02:48:16.624166] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.923 [2024-05-13 02:48:16.624180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.923 [2024-05-13 02:48:16.624233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:05ff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.923 [2024-05-13 02:48:16.624246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.923 [2024-05-13 02:48:16.624301] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffef SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.923 [2024-05-13 02:48:16.624315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:25.923 [2024-05-13 02:48:16.624366] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.923 [2024-05-13 02:48:16.624384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:25.923 #50 NEW cov: 12047 ft: 14892 corp: 40/1146b lim: 35 exec/s: 25 rss: 73Mb L: 35/35 MS: 1 ChangeBinInt- 00:07:25.923 #50 DONE cov: 12047 ft: 14892 corp: 40/1146b lim: 35 exec/s: 25 rss: 73Mb 00:07:25.923 Done 50 runs in 2 second(s) 00:07:25.923 [2024-05-13 02:48:16.644653] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' 
scheduled for removal in v24.09 hit 1 times 00:07:26.182 02:48:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_2.conf /var/tmp/suppress_nvmf_fuzz 00:07:26.182 02:48:16 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:26.182 02:48:16 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:26.182 02:48:16 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:07:26.182 02:48:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=3 00:07:26.182 02:48:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:26.182 02:48:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:26.182 02:48:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:07:26.182 02:48:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_3.conf 00:07:26.182 02:48:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:26.182 02:48:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:26.182 02:48:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 3 00:07:26.182 02:48:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4403 00:07:26.182 02:48:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:07:26.182 02:48:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' 00:07:26.182 02:48:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4403"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:26.182 02:48:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:26.182 02:48:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:26.182 02:48:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' -c /tmp/fuzz_json_3.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 -Z 3 00:07:26.182 [2024-05-13 02:48:16.807921] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:07:26.182 [2024-05-13 02:48:16.807991] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3498805 ] 00:07:26.182 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.440 [2024-05-13 02:48:17.022852] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:26.440 [2024-05-13 02:48:17.060518] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.440 [2024-05-13 02:48:17.091233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.440 [2024-05-13 02:48:17.143689] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:26.440 [2024-05-13 02:48:17.159648] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:26.440 [2024-05-13 02:48:17.160027] tcp.c: 966:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4403 *** 00:07:26.440 INFO: Running with entropic power schedule (0xFF, 100). 00:07:26.440 INFO: Seed: 213229545 00:07:26.440 INFO: Loaded 1 modules (350429 inline 8-bit counters): 350429 [0x279074c, 0x27e6029), 00:07:26.440 INFO: Loaded 1 PC tables (350429 PCs): 350429 [0x27e6030,0x2d3ee00), 00:07:26.440 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:07:26.440 INFO: A corpus is not provided, starting from an empty corpus 00:07:26.440 #2 INITED exec/s: 0 rss: 63Mb 00:07:26.440 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:26.440 This may also happen if the target rejected all inputs we tried so far 00:07:26.955 NEW_FUNC[1/673]: 0x4a8ce0 in fuzz_admin_abort_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:114 00:07:26.955 NEW_FUNC[2/673]: 0x4e0360 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:26.955 #4 NEW cov: 11693 ft: 11702 corp: 2/10b lim: 20 exec/s: 0 rss: 70Mb L: 9/9 MS: 2 ChangeBinInt-CMP- DE: "\000\2040\222t\274\263t"- 00:07:26.955 NEW_FUNC[1/1]: 0x17d0a80 in nvme_tcp_qpair /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_tcp.c:171 00:07:26.955 #7 NEW cov: 11835 ft: 12632 corp: 3/15b lim: 20 exec/s: 0 rss: 70Mb L: 5/9 MS: 3 CMP-ShuffleBytes-CrossOver- DE: "\177\000"- 00:07:26.955 #13 NEW cov: 11841 ft: 12800 corp: 4/24b lim: 20 exec/s: 0 rss: 70Mb L: 9/9 MS: 1 CMP- DE: "\001\000\002\000"- 00:07:26.955 #14 NEW cov: 11926 ft: 12977 corp: 5/33b lim: 20 exec/s: 0 rss: 70Mb L: 9/9 MS: 1 ChangeBinInt- 00:07:27.214 #15 NEW cov: 11943 ft: 13372 corp: 6/52b lim: 20 exec/s: 0 rss: 70Mb L: 19/19 MS: 1 InsertRepeatedBytes- 00:07:27.214 #16 NEW cov: 11943 ft: 13506 corp: 7/61b lim: 20 exec/s: 0 rss: 70Mb L: 9/19 MS: 1 PersAutoDict- DE: "\001\000\002\000"- 00:07:27.214 #17 NEW cov: 11943 ft: 13622 corp: 8/70b lim: 20 exec/s: 0 rss: 70Mb L: 9/19 MS: 1 CrossOver- 00:07:27.214 #18 NEW cov: 11943 ft: 13728 corp: 9/80b lim: 20 exec/s: 0 rss: 70Mb L: 10/19 MS: 1 InsertByte- 00:07:27.214 #19 NEW cov: 11943 ft: 13747 corp: 10/89b lim: 20 exec/s: 0 rss: 70Mb L: 9/19 MS: 1 ChangeBit- 00:07:27.473 #20 NEW cov: 11943 ft: 13771 corp: 11/98b lim: 20 exec/s: 0 rss: 70Mb L: 9/19 MS: 1 ChangeByte- 00:07:27.473 #21 NEW cov: 11943 ft: 13818 corp: 12/107b lim: 20 exec/s: 0 rss: 70Mb L: 9/19 MS: 1 CMP- DE: ",\257\246\374\265\006\000\000"- 00:07:27.473 NEW_FUNC[1/1]: 0x1a07120 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:27.473 #22 NEW cov: 11966 ft: 13872 corp: 13/124b lim: 20 exec/s: 0 rss: 70Mb L: 17/19 MS: 1 CMP- DE: "\247C}\366\2220\204\000"- 00:07:27.473 #23 NEW cov: 11966 ft: 13941 corp: 14/141b lim: 20 exec/s: 0 rss: 70Mb L: 17/19 MS: 1 ChangeByte- 
00:07:27.473 #24 NEW cov: 11966 ft: 13984 corp: 15/150b lim: 20 exec/s: 24 rss: 70Mb L: 9/19 MS: 1 CopyPart- 00:07:27.733 #25 NEW cov: 11966 ft: 14046 corp: 16/159b lim: 20 exec/s: 25 rss: 70Mb L: 9/19 MS: 1 ChangeByte- 00:07:27.733 #29 NEW cov: 11966 ft: 14159 corp: 17/165b lim: 20 exec/s: 29 rss: 70Mb L: 6/19 MS: 4 ChangeBinInt-CrossOver-ShuffleBytes-InsertRepeatedBytes- 00:07:27.733 #30 NEW cov: 11966 ft: 14178 corp: 18/182b lim: 20 exec/s: 30 rss: 70Mb L: 17/19 MS: 1 ChangeBit- 00:07:27.733 #31 NEW cov: 11966 ft: 14202 corp: 19/199b lim: 20 exec/s: 31 rss: 70Mb L: 17/19 MS: 1 ShuffleBytes- 00:07:27.733 #32 NEW cov: 11966 ft: 14209 corp: 20/208b lim: 20 exec/s: 32 rss: 71Mb L: 9/19 MS: 1 CopyPart- 00:07:27.991 #33 NEW cov: 11966 ft: 14246 corp: 21/217b lim: 20 exec/s: 33 rss: 71Mb L: 9/19 MS: 1 PersAutoDict- DE: ",\257\246\374\265\006\000\000"- 00:07:27.991 #34 NEW cov: 11966 ft: 14273 corp: 22/226b lim: 20 exec/s: 34 rss: 71Mb L: 9/19 MS: 1 ChangeBit- 00:07:27.991 #35 NEW cov: 11966 ft: 14336 corp: 23/246b lim: 20 exec/s: 35 rss: 71Mb L: 20/20 MS: 1 InsertRepeatedBytes- 00:07:27.991 #36 NEW cov: 11966 ft: 14426 corp: 24/263b lim: 20 exec/s: 36 rss: 71Mb L: 17/20 MS: 1 CopyPart- 00:07:27.991 #37 NEW cov: 11966 ft: 14449 corp: 25/272b lim: 20 exec/s: 37 rss: 71Mb L: 9/20 MS: 1 ChangeByte- 00:07:28.250 #38 NEW cov: 11966 ft: 14460 corp: 26/290b lim: 20 exec/s: 38 rss: 71Mb L: 18/20 MS: 1 InsertByte- 00:07:28.250 #39 NEW cov: 11966 ft: 14468 corp: 27/307b lim: 20 exec/s: 39 rss: 71Mb L: 17/20 MS: 1 ShuffleBytes- 00:07:28.250 #40 NEW cov: 11966 ft: 14481 corp: 28/316b lim: 20 exec/s: 40 rss: 71Mb L: 9/20 MS: 1 CrossOver- 00:07:28.250 #41 NEW cov: 11966 ft: 14496 corp: 29/325b lim: 20 exec/s: 41 rss: 71Mb L: 9/20 MS: 1 ChangeBinInt- 00:07:28.250 #42 NEW cov: 11966 ft: 14507 corp: 30/336b lim: 20 exec/s: 42 rss: 71Mb L: 11/20 MS: 1 EraseBytes- 00:07:28.509 #43 NEW cov: 11966 ft: 14526 corp: 31/345b lim: 20 exec/s: 43 rss: 72Mb L: 9/20 MS: 1 ShuffleBytes- 00:07:28.509 #44 NEW cov: 11966 ft: 14535 corp: 32/354b lim: 20 exec/s: 44 rss: 72Mb L: 9/20 MS: 1 ChangeBit- 00:07:28.509 #45 NEW cov: 11966 ft: 14618 corp: 33/372b lim: 20 exec/s: 45 rss: 72Mb L: 18/20 MS: 1 ShuffleBytes- 00:07:28.509 #46 NEW cov: 11966 ft: 14688 corp: 34/382b lim: 20 exec/s: 23 rss: 72Mb L: 10/20 MS: 1 InsertByte- 00:07:28.509 #46 DONE cov: 11966 ft: 14688 corp: 34/382b lim: 20 exec/s: 23 rss: 72Mb 00:07:28.509 ###### Recommended dictionary. ###### 00:07:28.509 "\000\2040\222t\274\263t" # Uses: 0 00:07:28.509 "\177\000" # Uses: 0 00:07:28.509 "\001\000\002\000" # Uses: 1 00:07:28.509 ",\257\246\374\265\006\000\000" # Uses: 1 00:07:28.509 "\247C}\366\2220\204\000" # Uses: 0 00:07:28.509 ###### End of recommended dictionary. 
###### 00:07:28.509 Done 46 runs in 2 second(s) 00:07:28.509 [2024-05-13 02:48:19.244224] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:28.768 02:48:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_3.conf /var/tmp/suppress_nvmf_fuzz 00:07:28.768 02:48:19 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:28.768 02:48:19 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:28.768 02:48:19 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:07:28.768 02:48:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=4 00:07:28.768 02:48:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:28.768 02:48:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:28.768 02:48:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:07:28.768 02:48:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_4.conf 00:07:28.768 02:48:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:28.768 02:48:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:28.768 02:48:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 4 00:07:28.768 02:48:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4404 00:07:28.768 02:48:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:07:28.768 02:48:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' 00:07:28.768 02:48:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4404"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:28.768 02:48:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:28.768 02:48:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:28.768 02:48:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' -c /tmp/fuzz_json_4.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 -Z 4 00:07:28.768 [2024-05-13 02:48:19.406175] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:07:28.768 [2024-05-13 02:48:19.406243] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3499113 ] 00:07:28.768 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.027 [2024-05-13 02:48:19.626002] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:29.027 [2024-05-13 02:48:19.666208] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.027 [2024-05-13 02:48:19.695679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.027 [2024-05-13 02:48:19.748301] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:29.027 [2024-05-13 02:48:19.764254] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:29.027 [2024-05-13 02:48:19.764657] tcp.c: 966:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4404 *** 00:07:29.027 INFO: Running with entropic power schedule (0xFF, 100). 00:07:29.027 INFO: Seed: 2816243795 00:07:29.027 INFO: Loaded 1 modules (350429 inline 8-bit counters): 350429 [0x279074c, 0x27e6029), 00:07:29.027 INFO: Loaded 1 PC tables (350429 PCs): 350429 [0x27e6030,0x2d3ee00), 00:07:29.027 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:07:29.027 INFO: A corpus is not provided, starting from an empty corpus 00:07:29.027 #2 INITED exec/s: 0 rss: 63Mb 00:07:29.027 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:29.027 This may also happen if the target rejected all inputs we tried so far 00:07:29.285 [2024-05-13 02:48:19.835760] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.285 [2024-05-13 02:48:19.835798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.285 [2024-05-13 02:48:19.835881] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.285 [2024-05-13 02:48:19.835897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.285 [2024-05-13 02:48:19.835971] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.285 [2024-05-13 02:48:19.835985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:29.285 [2024-05-13 02:48:19.836059] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.285 [2024-05-13 02:48:19.836074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:29.544 NEW_FUNC[1/686]: 0x4a9dd0 in fuzz_admin_create_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:126 00:07:29.544 NEW_FUNC[2/686]: 0x4e0360 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:29.544 #20 NEW cov: 11815 ft: 11814 corp: 2/30b lim: 35 exec/s: 0 rss: 70Mb L: 29/29 MS: 3 CopyPart-ShuffleBytes-InsertRepeatedBytes- 00:07:29.544 [2024-05-13 02:48:20.176110] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:29.544 [2024-05-13 02:48:20.176156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.544 [2024-05-13 02:48:20.176305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.544 [2024-05-13 02:48:20.176328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.544 [2024-05-13 02:48:20.176474] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.544 [2024-05-13 02:48:20.176494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:29.544 [2024-05-13 02:48:20.176632] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.544 [2024-05-13 02:48:20.176652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:29.544 #26 NEW cov: 11945 ft: 12542 corp: 3/63b lim: 35 exec/s: 0 rss: 70Mb L: 33/33 MS: 1 CMP- DE: "\000\000\002\000"- 00:07:29.544 [2024-05-13 02:48:20.236041] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.544 [2024-05-13 02:48:20.236067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.544 [2024-05-13 02:48:20.236197] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.544 [2024-05-13 02:48:20.236215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.544 [2024-05-13 02:48:20.236348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.544 [2024-05-13 02:48:20.236366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:29.544 [2024-05-13 02:48:20.236502] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffff23 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.544 [2024-05-13 02:48:20.236522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:29.544 #27 NEW cov: 11951 ft: 12874 corp: 4/93b lim: 35 exec/s: 0 rss: 70Mb L: 30/33 MS: 1 InsertByte- 00:07:29.544 [2024-05-13 02:48:20.286215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.544 [2024-05-13 02:48:20.286245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.544 [2024-05-13 02:48:20.286384] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:0000ff07 cdw11:00ff0003 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:07:29.544 [2024-05-13 02:48:20.286402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.544 [2024-05-13 02:48:20.286551] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.544 [2024-05-13 02:48:20.286569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:29.544 [2024-05-13 02:48:20.286701] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.544 [2024-05-13 02:48:20.286719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:29.544 #28 NEW cov: 12036 ft: 13138 corp: 5/126b lim: 35 exec/s: 0 rss: 70Mb L: 33/33 MS: 1 CMP- DE: "\007\000\000\000"- 00:07:29.544 [2024-05-13 02:48:20.346482] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.544 [2024-05-13 02:48:20.346508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.544 [2024-05-13 02:48:20.346643] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:0000ff07 cdw11:00ff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.544 [2024-05-13 02:48:20.346662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.544 [2024-05-13 02:48:20.346801] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.544 [2024-05-13 02:48:20.346819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:29.544 [2024-05-13 02:48:20.346965] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.544 [2024-05-13 02:48:20.346983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:29.803 #29 NEW cov: 12036 ft: 13208 corp: 6/158b lim: 35 exec/s: 0 rss: 70Mb L: 32/33 MS: 1 EraseBytes- 00:07:29.803 [2024-05-13 02:48:20.406659] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.803 [2024-05-13 02:48:20.406687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.803 [2024-05-13 02:48:20.406825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.803 [2024-05-13 02:48:20.406844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.803 [2024-05-13 02:48:20.406982] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ff230003 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:07:29.803 [2024-05-13 02:48:20.406999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:29.803 [2024-05-13 02:48:20.407141] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.803 [2024-05-13 02:48:20.407159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:29.803 #30 NEW cov: 12036 ft: 13285 corp: 7/188b lim: 35 exec/s: 0 rss: 70Mb L: 30/33 MS: 1 ShuffleBytes- 00:07:29.804 [2024-05-13 02:48:20.466732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.804 [2024-05-13 02:48:20.466759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.804 [2024-05-13 02:48:20.466905] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00ff0002 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.804 [2024-05-13 02:48:20.466924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.804 [2024-05-13 02:48:20.467064] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.804 [2024-05-13 02:48:20.467083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:29.804 [2024-05-13 02:48:20.467222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ff230003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.804 [2024-05-13 02:48:20.467241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:29.804 #31 NEW cov: 12036 ft: 13326 corp: 8/222b lim: 35 exec/s: 0 rss: 70Mb L: 34/34 MS: 1 PersAutoDict- DE: "\000\000\002\000"- 00:07:29.804 [2024-05-13 02:48:20.516994] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.804 [2024-05-13 02:48:20.517022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.804 [2024-05-13 02:48:20.517169] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:feffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.804 [2024-05-13 02:48:20.517189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.804 [2024-05-13 02:48:20.517330] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ff230003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.804 [2024-05-13 02:48:20.517346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:29.804 [2024-05-13 02:48:20.517488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff 
cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.804 [2024-05-13 02:48:20.517508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:29.804 #32 NEW cov: 12036 ft: 13439 corp: 9/252b lim: 35 exec/s: 0 rss: 70Mb L: 30/34 MS: 1 ChangeBit- 00:07:29.804 [2024-05-13 02:48:20.576788] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.804 [2024-05-13 02:48:20.576818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.804 [2024-05-13 02:48:20.576963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffff07 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.804 [2024-05-13 02:48:20.576981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.804 [2024-05-13 02:48:20.577135] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:02000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.804 [2024-05-13 02:48:20.577154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:29.804 #33 NEW cov: 12036 ft: 13767 corp: 10/273b lim: 35 exec/s: 0 rss: 70Mb L: 21/34 MS: 1 EraseBytes- 00:07:30.063 [2024-05-13 02:48:20.627537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.063 [2024-05-13 02:48:20.627576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.063 [2024-05-13 02:48:20.627714] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00ff0002 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.063 [2024-05-13 02:48:20.627733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.063 [2024-05-13 02:48:20.627872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.063 [2024-05-13 02:48:20.627894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.063 [2024-05-13 02:48:20.628037] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ff230003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.063 [2024-05-13 02:48:20.628057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:30.063 #34 NEW cov: 12036 ft: 13822 corp: 11/307b lim: 35 exec/s: 0 rss: 70Mb L: 34/34 MS: 1 ShuffleBytes- 00:07:30.063 [2024-05-13 02:48:20.687810] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.063 [2024-05-13 02:48:20.687836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.063 [2024-05-13 02:48:20.687975] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:feffffff cdw11:07000003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.063 [2024-05-13 02:48:20.687994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.063 [2024-05-13 02:48:20.688133] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ff230003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.063 [2024-05-13 02:48:20.688153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.063 [2024-05-13 02:48:20.688285] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.063 [2024-05-13 02:48:20.688303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:30.063 NEW_FUNC[1/1]: 0x1a07120 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:30.063 #35 NEW cov: 12059 ft: 13862 corp: 12/337b lim: 35 exec/s: 0 rss: 70Mb L: 30/34 MS: 1 ChangeBinInt- 00:07:30.063 [2024-05-13 02:48:20.747421] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.063 [2024-05-13 02:48:20.747448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.063 [2024-05-13 02:48:20.747576] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffd0ff07 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.063 [2024-05-13 02:48:20.747593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.063 [2024-05-13 02:48:20.747719] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:02000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.063 [2024-05-13 02:48:20.747748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.063 #36 NEW cov: 12059 ft: 13893 corp: 13/358b lim: 35 exec/s: 0 rss: 70Mb L: 21/34 MS: 1 ChangeByte- 00:07:30.063 [2024-05-13 02:48:20.807970] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.063 [2024-05-13 02:48:20.807996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.063 [2024-05-13 02:48:20.808128] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:0000ff07 cdw11:00ff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.063 [2024-05-13 02:48:20.808145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.063 [2024-05-13 02:48:20.808269] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.063 [2024-05-13 02:48:20.808289] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.063 [2024-05-13 02:48:20.808425] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.063 [2024-05-13 02:48:20.808444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:30.063 #37 NEW cov: 12059 ft: 13928 corp: 14/390b lim: 35 exec/s: 37 rss: 71Mb L: 32/34 MS: 1 ChangeBit- 00:07:30.322 [2024-05-13 02:48:20.868209] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffdfffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.322 [2024-05-13 02:48:20.868240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.322 [2024-05-13 02:48:20.868371] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00ff0002 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.322 [2024-05-13 02:48:20.868395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.322 [2024-05-13 02:48:20.868534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.322 [2024-05-13 02:48:20.868552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.322 [2024-05-13 02:48:20.868696] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ff230003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.322 [2024-05-13 02:48:20.868715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:30.322 #38 NEW cov: 12059 ft: 13945 corp: 15/424b lim: 35 exec/s: 38 rss: 71Mb L: 34/34 MS: 1 ChangeBit- 00:07:30.322 [2024-05-13 02:48:20.927320] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.322 [2024-05-13 02:48:20.927349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.322 #39 NEW cov: 12059 ft: 14703 corp: 16/433b lim: 35 exec/s: 39 rss: 71Mb L: 9/34 MS: 1 InsertRepeatedBytes- 00:07:30.322 [2024-05-13 02:48:20.987896] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.322 [2024-05-13 02:48:20.987927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.322 [2024-05-13 02:48:20.988071] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffd0ff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.322 [2024-05-13 02:48:20.988090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.322 #40 NEW cov: 12059 ft: 14936 corp: 17/451b lim: 35 exec/s: 40 rss: 71Mb L: 18/34 MS: 1 EraseBytes- 00:07:30.322 [2024-05-13 
02:48:21.048740] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.322 [2024-05-13 02:48:21.048768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.322 [2024-05-13 02:48:21.048903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:f7ff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.322 [2024-05-13 02:48:21.048923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.322 [2024-05-13 02:48:21.049051] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.322 [2024-05-13 02:48:21.049069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.322 [2024-05-13 02:48:21.049202] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.322 [2024-05-13 02:48:21.049229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:30.322 #41 NEW cov: 12059 ft: 14943 corp: 18/484b lim: 35 exec/s: 41 rss: 71Mb L: 33/34 MS: 1 ChangeBit- 00:07:30.322 [2024-05-13 02:48:21.098885] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.322 [2024-05-13 02:48:21.098914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.322 [2024-05-13 02:48:21.099047] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00ff0000 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.322 [2024-05-13 02:48:21.099065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.322 [2024-05-13 02:48:21.099197] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.322 [2024-05-13 02:48:21.099216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.322 [2024-05-13 02:48:21.099355] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.322 [2024-05-13 02:48:21.099373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:30.582 #42 NEW cov: 12059 ft: 14994 corp: 19/514b lim: 35 exec/s: 42 rss: 71Mb L: 30/34 MS: 1 EraseBytes- 00:07:30.582 [2024-05-13 02:48:21.159125] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.582 [2024-05-13 02:48:21.159153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.582 [2024-05-13 
02:48:21.159277] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00ff0000 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.582 [2024-05-13 02:48:21.159298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.582 [2024-05-13 02:48:21.159428] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.582 [2024-05-13 02:48:21.159447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.582 [2024-05-13 02:48:21.159578] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:00000003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.582 [2024-05-13 02:48:21.159597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:30.582 #43 NEW cov: 12059 ft: 15013 corp: 20/544b lim: 35 exec/s: 43 rss: 71Mb L: 30/34 MS: 1 ChangeBinInt- 00:07:30.582 [2024-05-13 02:48:21.219232] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.582 [2024-05-13 02:48:21.219261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.582 [2024-05-13 02:48:21.219396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00009900 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.582 [2024-05-13 02:48:21.219415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.582 [2024-05-13 02:48:21.219556] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.582 [2024-05-13 02:48:21.219578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.582 [2024-05-13 02:48:21.219709] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ff000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.582 [2024-05-13 02:48:21.219728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:30.582 #44 NEW cov: 12059 ft: 15018 corp: 21/575b lim: 35 exec/s: 44 rss: 71Mb L: 31/34 MS: 1 InsertByte- 00:07:30.582 [2024-05-13 02:48:21.279581] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.582 [2024-05-13 02:48:21.279608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.582 [2024-05-13 02:48:21.279734] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:0000ff07 cdw11:00ff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.582 [2024-05-13 02:48:21.279753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.583 
[2024-05-13 02:48:21.279885] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.583 [2024-05-13 02:48:21.279905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.583 [2024-05-13 02:48:21.280039] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.583 [2024-05-13 02:48:21.280057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:30.583 #45 NEW cov: 12059 ft: 15034 corp: 22/607b lim: 35 exec/s: 45 rss: 71Mb L: 32/34 MS: 1 ShuffleBytes- 00:07:30.583 [2024-05-13 02:48:21.329916] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:7f7f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.583 [2024-05-13 02:48:21.329945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.583 [2024-05-13 02:48:21.330084] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffff7f7f cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.583 [2024-05-13 02:48:21.330105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.583 [2024-05-13 02:48:21.330236] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.583 [2024-05-13 02:48:21.330254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.583 [2024-05-13 02:48:21.330399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ff23ffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.583 [2024-05-13 02:48:21.330417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:30.583 [2024-05-13 02:48:21.330563] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.583 [2024-05-13 02:48:21.330582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:30.583 #46 NEW cov: 12059 ft: 15081 corp: 23/642b lim: 35 exec/s: 46 rss: 71Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:07:30.583 [2024-05-13 02:48:21.379695] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.583 [2024-05-13 02:48:21.379725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.583 [2024-05-13 02:48:21.379855] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:fffe0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.583 [2024-05-13 02:48:21.379872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:07:30.583 [2024-05-13 02:48:21.380006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.583 [2024-05-13 02:48:21.380027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.583 [2024-05-13 02:48:21.380157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffff23 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.583 [2024-05-13 02:48:21.380175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:30.842 #47 NEW cov: 12059 ft: 15104 corp: 24/675b lim: 35 exec/s: 47 rss: 71Mb L: 33/35 MS: 1 InsertRepeatedBytes- 00:07:30.843 [2024-05-13 02:48:21.429894] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.843 [2024-05-13 02:48:21.429920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.843 [2024-05-13 02:48:21.430054] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.843 [2024-05-13 02:48:21.430073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.843 [2024-05-13 02:48:21.430214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.843 [2024-05-13 02:48:21.430233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.843 [2024-05-13 02:48:21.430368] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ff01ffff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.843 [2024-05-13 02:48:21.430390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:30.843 #48 NEW cov: 12059 ft: 15110 corp: 25/708b lim: 35 exec/s: 48 rss: 71Mb L: 33/35 MS: 1 ChangeBinInt- 00:07:30.843 [2024-05-13 02:48:21.480047] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffdff9ff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.843 [2024-05-13 02:48:21.480075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.843 [2024-05-13 02:48:21.480211] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00ff0002 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.843 [2024-05-13 02:48:21.480231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.843 [2024-05-13 02:48:21.480369] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.843 [2024-05-13 02:48:21.480390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 
m:0 dnr:0 00:07:30.843 [2024-05-13 02:48:21.480537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ff230003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.843 [2024-05-13 02:48:21.480554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:30.843 #49 NEW cov: 12059 ft: 15168 corp: 26/742b lim: 35 exec/s: 49 rss: 71Mb L: 34/35 MS: 1 ChangeBinInt- 00:07:30.843 [2024-05-13 02:48:21.540247] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.843 [2024-05-13 02:48:21.540276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.843 [2024-05-13 02:48:21.540414] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffff07 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.843 [2024-05-13 02:48:21.540444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.843 [2024-05-13 02:48:21.540570] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:02000003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.843 [2024-05-13 02:48:21.540587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.843 [2024-05-13 02:48:21.540714] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:07ff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.843 [2024-05-13 02:48:21.540733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:30.843 #50 NEW cov: 12059 ft: 15222 corp: 27/772b lim: 35 exec/s: 50 rss: 72Mb L: 30/35 MS: 1 CopyPart- 00:07:30.843 [2024-05-13 02:48:21.590748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:02000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.843 [2024-05-13 02:48:21.590775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.843 [2024-05-13 02:48:21.590912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffff7f7f cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.843 [2024-05-13 02:48:21.590932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.843 [2024-05-13 02:48:21.591068] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.843 [2024-05-13 02:48:21.591087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.843 [2024-05-13 02:48:21.591215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ff23ffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.843 [2024-05-13 02:48:21.591234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 
m:0 dnr:0 00:07:30.843 [2024-05-13 02:48:21.591365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.843 [2024-05-13 02:48:21.591388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:30.843 #51 NEW cov: 12059 ft: 15309 corp: 28/807b lim: 35 exec/s: 51 rss: 72Mb L: 35/35 MS: 1 PersAutoDict- DE: "\000\000\002\000"- 00:07:31.102 [2024-05-13 02:48:21.649628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.102 [2024-05-13 02:48:21.649656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.103 #52 NEW cov: 12059 ft: 15367 corp: 29/816b lim: 35 exec/s: 52 rss: 72Mb L: 9/35 MS: 1 ShuffleBytes- 00:07:31.103 [2024-05-13 02:48:21.710783] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.103 [2024-05-13 02:48:21.710813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.103 [2024-05-13 02:48:21.710944] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00ff0000 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.103 [2024-05-13 02:48:21.710963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.103 [2024-05-13 02:48:21.711107] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:01000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.103 [2024-05-13 02:48:21.711125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.103 [2024-05-13 02:48:21.711256] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:ff000003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.103 [2024-05-13 02:48:21.711274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:31.103 #53 NEW cov: 12059 ft: 15478 corp: 30/846b lim: 35 exec/s: 53 rss: 72Mb L: 30/35 MS: 1 ChangeBinInt- 00:07:31.103 [2024-05-13 02:48:21.760963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffdf0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.103 [2024-05-13 02:48:21.760991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.103 [2024-05-13 02:48:21.761137] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:0002ff00 cdw11:00ff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.103 [2024-05-13 02:48:21.761156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.103 [2024-05-13 02:48:21.761298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ff000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.103 
[2024-05-13 02:48:21.761316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.103 [2024-05-13 02:48:21.761460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:7f7f0200 cdw11:7fff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.103 [2024-05-13 02:48:21.761478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:31.103 #54 NEW cov: 12059 ft: 15484 corp: 31/880b lim: 35 exec/s: 54 rss: 72Mb L: 34/35 MS: 1 CrossOver- 00:07:31.103 [2024-05-13 02:48:21.821053] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffdfff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.103 [2024-05-13 02:48:21.821078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.103 [2024-05-13 02:48:21.821204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:0000ff07 cdw11:00ff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.103 [2024-05-13 02:48:21.821223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.103 [2024-05-13 02:48:21.821354] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.103 [2024-05-13 02:48:21.821372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.103 [2024-05-13 02:48:21.821523] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.103 [2024-05-13 02:48:21.821542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:31.103 #55 NEW cov: 12059 ft: 15526 corp: 32/913b lim: 35 exec/s: 27 rss: 72Mb L: 33/35 MS: 1 ChangeBit- 00:07:31.103 #55 DONE cov: 12059 ft: 15526 corp: 32/913b lim: 35 exec/s: 27 rss: 72Mb 00:07:31.103 ###### Recommended dictionary. ###### 00:07:31.103 "\000\000\002\000" # Uses: 2 00:07:31.103 "\007\000\000\000" # Uses: 0 00:07:31.103 ###### End of recommended dictionary. 
###### 00:07:31.103 Done 55 runs in 2 second(s) 00:07:31.103 [2024-05-13 02:48:21.841729] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:31.363 02:48:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_4.conf /var/tmp/suppress_nvmf_fuzz 00:07:31.363 02:48:21 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:31.363 02:48:21 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:31.363 02:48:21 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:07:31.363 02:48:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=5 00:07:31.363 02:48:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:31.363 02:48:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:31.363 02:48:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:07:31.363 02:48:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_5.conf 00:07:31.363 02:48:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:31.363 02:48:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:31.363 02:48:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 5 00:07:31.363 02:48:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4405 00:07:31.363 02:48:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:07:31.363 02:48:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' 00:07:31.363 02:48:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4405"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:31.363 02:48:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:31.363 02:48:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:31.363 02:48:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' -c /tmp/fuzz_json_5.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 -Z 5 00:07:31.363 [2024-05-13 02:48:22.004569] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:07:31.363 [2024-05-13 02:48:22.004634] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3499636 ] 00:07:31.363 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.622 [2024-05-13 02:48:22.221060] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:31.622 [2024-05-13 02:48:22.258313] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.622 [2024-05-13 02:48:22.287465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.622 [2024-05-13 02:48:22.339798] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:31.622 [2024-05-13 02:48:22.355748] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:31.622 [2024-05-13 02:48:22.356148] tcp.c: 966:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4405 *** 00:07:31.622 INFO: Running with entropic power schedule (0xFF, 100). 00:07:31.622 INFO: Seed: 1115273275 00:07:31.622 INFO: Loaded 1 modules (350429 inline 8-bit counters): 350429 [0x279074c, 0x27e6029), 00:07:31.622 INFO: Loaded 1 PC tables (350429 PCs): 350429 [0x27e6030,0x2d3ee00), 00:07:31.622 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:07:31.622 INFO: A corpus is not provided, starting from an empty corpus 00:07:31.622 #2 INITED exec/s: 0 rss: 63Mb 00:07:31.622 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:31.622 This may also happen if the target rejected all inputs we tried so far 00:07:31.622 [2024-05-13 02:48:22.421638] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0a900a2b cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.622 [2024-05-13 02:48:22.421667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.622 [2024-05-13 02:48:22.421721] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:90909090 cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.622 [2024-05-13 02:48:22.421735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.140 NEW_FUNC[1/686]: 0x4abf60 in fuzz_admin_create_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:142 00:07:32.140 NEW_FUNC[2/686]: 0x4e0360 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:32.140 #6 NEW cov: 11823 ft: 11825 corp: 2/23b lim: 45 exec/s: 0 rss: 70Mb L: 22/22 MS: 4 ShuffleBytes-InsertByte-CrossOver-InsertRepeatedBytes- 00:07:32.140 [2024-05-13 02:48:22.752782] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0a900a2b cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.140 [2024-05-13 02:48:22.752839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.140 [2024-05-13 02:48:22.752920] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:90909090 cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.140 [2024-05-13 02:48:22.752947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.140 [2024-05-13 02:48:22.753024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:90909090 cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:07:32.140 [2024-05-13 02:48:22.753049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.140 #7 NEW cov: 11956 ft: 12761 corp: 3/51b lim: 45 exec/s: 0 rss: 70Mb L: 28/28 MS: 1 CrossOver- 00:07:32.140 [2024-05-13 02:48:22.802659] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0a900a2b cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.140 [2024-05-13 02:48:22.802687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.140 [2024-05-13 02:48:22.802755] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:90909090 cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.140 [2024-05-13 02:48:22.802769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.140 [2024-05-13 02:48:22.802822] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:90909090 cdw11:24900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.140 [2024-05-13 02:48:22.802836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.140 #13 NEW cov: 11962 ft: 13088 corp: 4/79b lim: 45 exec/s: 0 rss: 70Mb L: 28/28 MS: 1 ChangeByte- 00:07:32.140 [2024-05-13 02:48:22.842728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0a900a2b cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.140 [2024-05-13 02:48:22.842756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.140 [2024-05-13 02:48:22.842811] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:90909090 cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.140 [2024-05-13 02:48:22.842825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.140 [2024-05-13 02:48:22.842876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:90909090 cdw11:0a900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.140 [2024-05-13 02:48:22.842889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.140 #14 NEW cov: 12047 ft: 13425 corp: 5/110b lim: 45 exec/s: 0 rss: 70Mb L: 31/31 MS: 1 CrossOver- 00:07:32.140 [2024-05-13 02:48:22.882870] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0a900a2b cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.140 [2024-05-13 02:48:22.882896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.140 [2024-05-13 02:48:22.882965] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:90909090 cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.140 [2024-05-13 02:48:22.882980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.140 [2024-05-13 02:48:22.883033] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:90909090 cdw11:24900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.140 [2024-05-13 02:48:22.883047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.140 #20 NEW cov: 12047 ft: 13487 corp: 6/138b lim: 45 exec/s: 0 rss: 70Mb L: 28/31 MS: 1 ChangeByte- 00:07:32.140 [2024-05-13 02:48:22.922947] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0a902b0a cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.140 [2024-05-13 02:48:22.922971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.140 [2024-05-13 02:48:22.923024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:90909090 cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.141 [2024-05-13 02:48:22.923037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.141 [2024-05-13 02:48:22.923089] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:90909090 cdw11:24900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.141 [2024-05-13 02:48:22.923102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.400 #21 NEW cov: 12047 ft: 13629 corp: 7/166b lim: 45 exec/s: 0 rss: 70Mb L: 28/31 MS: 1 ShuffleBytes- 00:07:32.400 [2024-05-13 02:48:22.972933] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0a900a2b cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.400 [2024-05-13 02:48:22.972957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.400 [2024-05-13 02:48:22.973008] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:90909090 cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.401 [2024-05-13 02:48:22.973021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.401 #22 NEW cov: 12047 ft: 13669 corp: 8/189b lim: 45 exec/s: 0 rss: 70Mb L: 23/31 MS: 1 EraseBytes- 00:07:32.401 [2024-05-13 02:48:23.013196] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0a900a2b cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.401 [2024-05-13 02:48:23.013220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.401 [2024-05-13 02:48:23.013277] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:90909090 cdw11:90900003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.401 [2024-05-13 02:48:23.013291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.401 [2024-05-13 02:48:23.013360] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:90909090 cdw11:900a0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.401 [2024-05-13 02:48:23.013374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 
sqhd:0011 p:0 m:0 dnr:0 00:07:32.401 #23 NEW cov: 12047 ft: 13709 corp: 9/221b lim: 45 exec/s: 0 rss: 70Mb L: 32/32 MS: 1 InsertByte- 00:07:32.401 [2024-05-13 02:48:23.063491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:f8f80af8 cdw11:f8f80007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.401 [2024-05-13 02:48:23.063516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.401 [2024-05-13 02:48:23.063570] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:f8f8f8f8 cdw11:f8f80007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.401 [2024-05-13 02:48:23.063583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.401 [2024-05-13 02:48:23.063634] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:f8f8f8f8 cdw11:f8f80007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.401 [2024-05-13 02:48:23.063647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.401 [2024-05-13 02:48:23.063698] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:f8f8f8f8 cdw11:f8f80007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.401 [2024-05-13 02:48:23.063710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:32.401 #26 NEW cov: 12047 ft: 14111 corp: 10/263b lim: 45 exec/s: 0 rss: 70Mb L: 42/42 MS: 3 CopyPart-ShuffleBytes-InsertRepeatedBytes- 00:07:32.401 [2024-05-13 02:48:23.103582] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:f8f80af8 cdw11:f8f80007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.401 [2024-05-13 02:48:23.103607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.401 [2024-05-13 02:48:23.103679] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:2a00f8f8 cdw11:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.401 [2024-05-13 02:48:23.103692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.401 [2024-05-13 02:48:23.103743] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:f8f8f8f8 cdw11:f8f80007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.401 [2024-05-13 02:48:23.103756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.401 [2024-05-13 02:48:23.103810] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:f8f8f8f8 cdw11:f8f80007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.401 [2024-05-13 02:48:23.103823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:32.401 #27 NEW cov: 12047 ft: 14154 corp: 11/305b lim: 45 exec/s: 0 rss: 70Mb L: 42/42 MS: 1 ChangeBinInt- 00:07:32.401 [2024-05-13 02:48:23.153252] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0a900a2b cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:32.401 [2024-05-13 02:48:23.153276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.401 #28 NEW cov: 12047 ft: 14897 corp: 12/316b lim: 45 exec/s: 0 rss: 70Mb L: 11/42 MS: 1 EraseBytes- 00:07:32.401 [2024-05-13 02:48:23.193692] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0a900a2b cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.401 [2024-05-13 02:48:23.193717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.401 [2024-05-13 02:48:23.193774] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:90908d90 cdw11:90900003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.401 [2024-05-13 02:48:23.193787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.401 [2024-05-13 02:48:23.193839] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:90909090 cdw11:900a0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.401 [2024-05-13 02:48:23.193852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.660 #29 NEW cov: 12047 ft: 14905 corp: 13/348b lim: 45 exec/s: 0 rss: 70Mb L: 32/42 MS: 1 ChangeBinInt- 00:07:32.660 [2024-05-13 02:48:23.243830] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0a2bff1e cdw11:0a900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.660 [2024-05-13 02:48:23.243854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.660 [2024-05-13 02:48:23.243909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:90909090 cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.660 [2024-05-13 02:48:23.243923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.660 [2024-05-13 02:48:23.243977] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:90909090 cdw11:90900000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.660 [2024-05-13 02:48:23.243990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.660 #30 NEW cov: 12047 ft: 14922 corp: 14/381b lim: 45 exec/s: 0 rss: 70Mb L: 33/42 MS: 1 CMP- DE: "\377\036"- 00:07:32.660 [2024-05-13 02:48:23.283941] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0a900a2b cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.660 [2024-05-13 02:48:23.283965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.660 [2024-05-13 02:48:23.284021] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:90908d90 cdw11:90900003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.660 [2024-05-13 02:48:23.284034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.660 [2024-05-13 02:48:23.284105] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:90909090 cdw11:900a0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.660 [2024-05-13 02:48:23.284119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.660 NEW_FUNC[1/1]: 0x1a07120 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:32.660 #31 NEW cov: 12070 ft: 14984 corp: 15/413b lim: 45 exec/s: 0 rss: 70Mb L: 32/42 MS: 1 ChangeBit- 00:07:32.660 [2024-05-13 02:48:23.333751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:90900a90 cdw11:2b900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.660 [2024-05-13 02:48:23.333779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.660 #32 NEW cov: 12070 ft: 15029 corp: 16/424b lim: 45 exec/s: 0 rss: 70Mb L: 11/42 MS: 1 ShuffleBytes- 00:07:32.660 [2024-05-13 02:48:23.384354] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:f8f8fcf8 cdw11:f8f80007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.660 [2024-05-13 02:48:23.384383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.661 [2024-05-13 02:48:23.384438] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:f8f8f8f8 cdw11:f8f80007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.661 [2024-05-13 02:48:23.384452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.661 [2024-05-13 02:48:23.384507] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:f8f8f8f8 cdw11:f8f80007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.661 [2024-05-13 02:48:23.384519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.661 [2024-05-13 02:48:23.384572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:f8f8f8f8 cdw11:f8f80007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.661 [2024-05-13 02:48:23.384585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:32.661 #35 NEW cov: 12070 ft: 15086 corp: 17/467b lim: 45 exec/s: 35 rss: 70Mb L: 43/43 MS: 3 CopyPart-ChangeBinInt-InsertRepeatedBytes- 00:07:32.661 [2024-05-13 02:48:23.424362] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0a902b0a cdw11:90ff0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.661 [2024-05-13 02:48:23.424392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.661 [2024-05-13 02:48:23.424461] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:90909090 cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.661 [2024-05-13 02:48:23.424475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.661 [2024-05-13 02:48:23.424527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:90909090 
cdw11:90240004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.661 [2024-05-13 02:48:23.424541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.661 #36 NEW cov: 12070 ft: 15123 corp: 18/496b lim: 45 exec/s: 36 rss: 70Mb L: 29/43 MS: 1 InsertByte- 00:07:32.920 [2024-05-13 02:48:23.474497] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0f900a2b cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.920 [2024-05-13 02:48:23.474522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.920 [2024-05-13 02:48:23.474578] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:90909090 cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.920 [2024-05-13 02:48:23.474592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.920 [2024-05-13 02:48:23.474646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:90909090 cdw11:24900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.920 [2024-05-13 02:48:23.474660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.920 #37 NEW cov: 12070 ft: 15158 corp: 19/524b lim: 45 exec/s: 37 rss: 70Mb L: 28/43 MS: 1 ChangeBinInt- 00:07:32.920 [2024-05-13 02:48:23.514589] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0a900a2b cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.920 [2024-05-13 02:48:23.514613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.920 [2024-05-13 02:48:23.514668] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:90908d90 cdw11:90900003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.920 [2024-05-13 02:48:23.514681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.920 [2024-05-13 02:48:23.514734] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:90909090 cdw11:900a0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.920 [2024-05-13 02:48:23.514747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.920 #38 NEW cov: 12070 ft: 15167 corp: 20/559b lim: 45 exec/s: 38 rss: 70Mb L: 35/43 MS: 1 CrossOver- 00:07:32.920 [2024-05-13 02:48:23.554445] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:90903a90 cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.920 [2024-05-13 02:48:23.554470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.920 #40 NEW cov: 12070 ft: 15194 corp: 21/568b lim: 45 exec/s: 40 rss: 70Mb L: 9/43 MS: 2 ChangeByte-CrossOver- 00:07:32.920 [2024-05-13 02:48:23.594982] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:f8f80af8 cdw11:f8f80007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.920 [2024-05-13 02:48:23.595006] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.920 [2024-05-13 02:48:23.595059] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:f8f8f8f8 cdw11:f8f80007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.920 [2024-05-13 02:48:23.595072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.920 [2024-05-13 02:48:23.595125] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:f8f8f8f8 cdw11:f8f80007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.920 [2024-05-13 02:48:23.595138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.920 [2024-05-13 02:48:23.595190] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:f8f8f8f8 cdw11:f8f80007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.920 [2024-05-13 02:48:23.595203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:32.920 #41 NEW cov: 12070 ft: 15198 corp: 22/610b lim: 45 exec/s: 41 rss: 70Mb L: 42/43 MS: 1 CopyPart- 00:07:32.920 [2024-05-13 02:48:23.635121] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:f8f80af8 cdw11:f8f80007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.920 [2024-05-13 02:48:23.635146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.920 [2024-05-13 02:48:23.635201] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:f8f8f8f8 cdw11:f8f80007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.920 [2024-05-13 02:48:23.635215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.920 [2024-05-13 02:48:23.635268] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:f8f8f8f8 cdw11:f8f80007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.920 [2024-05-13 02:48:23.635281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.920 [2024-05-13 02:48:23.635340] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:f8f8f8f8 cdw11:f8f80007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.920 [2024-05-13 02:48:23.635353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:32.920 #42 NEW cov: 12070 ft: 15215 corp: 23/654b lim: 45 exec/s: 42 rss: 70Mb L: 44/44 MS: 1 CopyPart- 00:07:32.920 [2024-05-13 02:48:23.675067] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0a900a2b cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.920 [2024-05-13 02:48:23.675093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.920 [2024-05-13 02:48:23.675164] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:90909090 cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.920 [2024-05-13 02:48:23.675178] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.920 [2024-05-13 02:48:23.675232] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:f8f8f8f8 cdw11:f8f80007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.920 [2024-05-13 02:48:23.675245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.920 #43 NEW cov: 12070 ft: 15228 corp: 24/689b lim: 45 exec/s: 43 rss: 70Mb L: 35/44 MS: 1 CrossOver- 00:07:32.921 [2024-05-13 02:48:23.714892] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0a7e0a2b cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.921 [2024-05-13 02:48:23.714916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.179 #44 NEW cov: 12070 ft: 15242 corp: 25/700b lim: 45 exec/s: 44 rss: 70Mb L: 11/44 MS: 1 ChangeByte- 00:07:33.179 [2024-05-13 02:48:23.755186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:3a903a90 cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.179 [2024-05-13 02:48:23.755211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.179 [2024-05-13 02:48:23.755265] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:90909024 cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.179 [2024-05-13 02:48:23.755279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.179 #50 NEW cov: 12070 ft: 15243 corp: 26/718b lim: 45 exec/s: 50 rss: 70Mb L: 18/44 MS: 1 CopyPart- 00:07:33.179 [2024-05-13 02:48:23.805461] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0f900a2b cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.179 [2024-05-13 02:48:23.805485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.179 [2024-05-13 02:48:23.805538] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:90909090 cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.179 [2024-05-13 02:48:23.805552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.179 [2024-05-13 02:48:23.805602] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:90909090 cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.179 [2024-05-13 02:48:23.805632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.179 #51 NEW cov: 12070 ft: 15283 corp: 27/750b lim: 45 exec/s: 51 rss: 71Mb L: 32/44 MS: 1 CopyPart- 00:07:33.179 [2024-05-13 02:48:23.855479] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0a900a2b cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.179 [2024-05-13 02:48:23.855508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.179 [2024-05-13 02:48:23.855561] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:90909090 cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.179 [2024-05-13 02:48:23.855574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.179 #52 NEW cov: 12070 ft: 15367 corp: 28/773b lim: 45 exec/s: 52 rss: 71Mb L: 23/44 MS: 1 ChangeBit- 00:07:33.179 [2024-05-13 02:48:23.895773] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0a7e0a2b cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.179 [2024-05-13 02:48:23.895798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.179 [2024-05-13 02:48:23.895853] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:0a90902b cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.179 [2024-05-13 02:48:23.895866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.179 [2024-05-13 02:48:23.895921] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:90909090 cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.179 [2024-05-13 02:48:23.895934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.179 #53 NEW cov: 12070 ft: 15374 corp: 29/805b lim: 45 exec/s: 53 rss: 71Mb L: 32/44 MS: 1 CrossOver- 00:07:33.179 [2024-05-13 02:48:23.936046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:f8f80af8 cdw11:f8f80007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.179 [2024-05-13 02:48:23.936070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.179 [2024-05-13 02:48:23.936124] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:f8f8f8f8 cdw11:f8f80007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.179 [2024-05-13 02:48:23.936138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.179 [2024-05-13 02:48:23.936190] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:f8f8f8f8 cdw11:f8f80007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.179 [2024-05-13 02:48:23.936203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.179 [2024-05-13 02:48:23.936255] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:f8f8f8f8 cdw11:f8f80007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.179 [2024-05-13 02:48:23.936268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.179 #54 NEW cov: 12070 ft: 15380 corp: 30/849b lim: 45 exec/s: 54 rss: 71Mb L: 44/44 MS: 1 ChangeBit- 00:07:33.438 [2024-05-13 02:48:23.985905] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0a2b0a2b cdw11:0f900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.438 [2024-05-13 02:48:23.985931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.438 [2024-05-13 02:48:23.985988] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:90909090 cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.438 [2024-05-13 02:48:23.986001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.438 [2024-05-13 02:48:23.986054] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:90909090 cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.438 [2024-05-13 02:48:23.986071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.438 #55 NEW cov: 12070 ft: 15398 corp: 31/881b lim: 45 exec/s: 55 rss: 71Mb L: 32/44 MS: 1 CrossOver- 00:07:33.438 [2024-05-13 02:48:24.036076] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:f52b0ad2 cdw11:0f900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.438 [2024-05-13 02:48:24.036100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.438 [2024-05-13 02:48:24.036156] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:90909090 cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.438 [2024-05-13 02:48:24.036169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.438 [2024-05-13 02:48:24.036222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:90909090 cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.438 [2024-05-13 02:48:24.036235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.438 #56 NEW cov: 12070 ft: 15411 corp: 32/913b lim: 45 exec/s: 56 rss: 71Mb L: 32/44 MS: 1 ChangeBinInt- 00:07:33.438 [2024-05-13 02:48:24.076393] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:f8f8fcf8 cdw11:f8f80007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.438 [2024-05-13 02:48:24.076418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.438 [2024-05-13 02:48:24.076474] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:f8f8f8f8 cdw11:f8f80007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.438 [2024-05-13 02:48:24.076487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.438 [2024-05-13 02:48:24.076540] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:f8f8f8f8 cdw11:f8f80007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.438 [2024-05-13 02:48:24.076552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.438 [2024-05-13 02:48:24.076606] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:f8f8f8f8 cdw11:f8f80007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.438 [2024-05-13 02:48:24.076618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.438 #57 NEW cov: 12070 ft: 15418 corp: 33/956b lim: 45 exec/s: 57 rss: 71Mb L: 43/44 MS: 1 CopyPart- 00:07:33.438 [2024-05-13 02:48:24.126527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0a900a2b cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.438 [2024-05-13 02:48:24.126552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.438 [2024-05-13 02:48:24.126620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:90909090 cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.438 [2024-05-13 02:48:24.126634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.438 [2024-05-13 02:48:24.126687] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:9090f8f8 cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.438 [2024-05-13 02:48:24.126700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.438 [2024-05-13 02:48:24.126752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:f8f8f8f8 cdw11:f8f80007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.438 [2024-05-13 02:48:24.126769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.438 #58 NEW cov: 12070 ft: 15423 corp: 34/996b lim: 45 exec/s: 58 rss: 71Mb L: 40/44 MS: 1 CopyPart- 00:07:33.438 [2024-05-13 02:48:24.176642] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0a900a2b cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.438 [2024-05-13 02:48:24.176666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.438 [2024-05-13 02:48:24.176722] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:90909090 cdw11:26900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.438 [2024-05-13 02:48:24.176735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.438 [2024-05-13 02:48:24.176788] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:f8f890f8 cdw11:f8f80007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.438 [2024-05-13 02:48:24.176801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.438 [2024-05-13 02:48:24.176852] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:f8f8f8f8 cdw11:f8900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.439 [2024-05-13 02:48:24.176864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.439 #59 NEW cov: 12070 ft: 15428 corp: 35/1032b lim: 45 exec/s: 59 rss: 71Mb L: 36/44 MS: 1 InsertByte- 00:07:33.439 [2024-05-13 02:48:24.216587] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0a902b0a cdw11:92900004 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:07:33.439 [2024-05-13 02:48:24.216612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.439 [2024-05-13 02:48:24.216668] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:90909090 cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.439 [2024-05-13 02:48:24.216682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.439 [2024-05-13 02:48:24.216738] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:90909090 cdw11:24900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.439 [2024-05-13 02:48:24.216751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.439 #60 NEW cov: 12070 ft: 15438 corp: 36/1060b lim: 45 exec/s: 60 rss: 71Mb L: 28/44 MS: 1 ChangeBit- 00:07:33.699 [2024-05-13 02:48:24.256879] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:f8f80af8 cdw11:f8f80007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.699 [2024-05-13 02:48:24.256903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.699 [2024-05-13 02:48:24.256957] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:f8f8f8f8 cdw11:f8f80007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.699 [2024-05-13 02:48:24.256970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.699 [2024-05-13 02:48:24.257021] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:f8f8f8f8 cdw11:f8f80007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.699 [2024-05-13 02:48:24.257034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.699 [2024-05-13 02:48:24.257087] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:f8f8f8f8 cdw11:f8f80007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.699 [2024-05-13 02:48:24.257105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.699 #61 NEW cov: 12070 ft: 15458 corp: 37/1102b lim: 45 exec/s: 61 rss: 71Mb L: 42/44 MS: 1 ShuffleBytes- 00:07:33.699 [2024-05-13 02:48:24.306857] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0a900a2b cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.699 [2024-05-13 02:48:24.306882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.699 [2024-05-13 02:48:24.306937] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:90909090 cdw11:90900003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.699 [2024-05-13 02:48:24.306950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.699 [2024-05-13 02:48:24.307002] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:90909090 cdw11:900a0004 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:07:33.699 [2024-05-13 02:48:24.307015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.699 #62 NEW cov: 12070 ft: 15462 corp: 38/1134b lim: 45 exec/s: 62 rss: 71Mb L: 32/44 MS: 1 ChangeBinInt- 00:07:33.699 [2024-05-13 02:48:24.347287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0a900a2b cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.699 [2024-05-13 02:48:24.347312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.699 [2024-05-13 02:48:24.347367] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:90908d90 cdw11:90900003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.699 [2024-05-13 02:48:24.347384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.699 [2024-05-13 02:48:24.347435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:90909090 cdw11:900a0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.699 [2024-05-13 02:48:24.347464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.699 [2024-05-13 02:48:24.347516] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:9090907a cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.699 [2024-05-13 02:48:24.347529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.699 [2024-05-13 02:48:24.347579] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:90900a90 cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.699 [2024-05-13 02:48:24.347592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:33.699 #63 NEW cov: 12070 ft: 15513 corp: 39/1179b lim: 45 exec/s: 63 rss: 71Mb L: 45/45 MS: 1 CopyPart- 00:07:33.699 [2024-05-13 02:48:24.387242] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0f900a2b cdw11:90900004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.699 [2024-05-13 02:48:24.387266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.699 [2024-05-13 02:48:24.387338] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:90909090 cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.699 [2024-05-13 02:48:24.387352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.699 [2024-05-13 02:48:24.387409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.699 [2024-05-13 02:48:24.387426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.699 [2024-05-13 02:48:24.387478] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:90909090 cdw11:90900004 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:07:33.699 [2024-05-13 02:48:24.387492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.699 #64 pulse cov: 12070 ft: 15522 corp: 39/1179b lim: 45 exec/s: 32 rss: 71Mb 00:07:33.699 #64 NEW cov: 12070 ft: 15522 corp: 40/1223b lim: 45 exec/s: 32 rss: 71Mb L: 44/45 MS: 1 InsertRepeatedBytes- 00:07:33.699 #64 DONE cov: 12070 ft: 15522 corp: 40/1223b lim: 45 exec/s: 32 rss: 71Mb 00:07:33.699 ###### Recommended dictionary. ###### 00:07:33.699 "\377\036" # Uses: 0 00:07:33.699 ###### End of recommended dictionary. ###### 00:07:33.699 Done 64 runs in 2 second(s) 00:07:33.699 [2024-05-13 02:48:24.416176] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:33.958 02:48:24 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_5.conf /var/tmp/suppress_nvmf_fuzz 00:07:33.958 02:48:24 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:33.958 02:48:24 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:33.958 02:48:24 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:07:33.958 02:48:24 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=6 00:07:33.958 02:48:24 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:33.958 02:48:24 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:33.958 02:48:24 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:07:33.958 02:48:24 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_6.conf 00:07:33.958 02:48:24 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:33.958 02:48:24 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:33.958 02:48:24 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 6 00:07:33.958 02:48:24 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4406 00:07:33.958 02:48:24 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:07:33.958 02:48:24 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' 00:07:33.958 02:48:24 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4406"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:33.958 02:48:24 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:33.958 02:48:24 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:33.958 02:48:24 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' -c /tmp/fuzz_json_6.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 -Z 6 00:07:33.958 [2024-05-13 02:48:24.575922] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 
00:07:33.958 [2024-05-13 02:48:24.575993] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3500167 ] 00:07:33.958 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.217 [2024-05-13 02:48:24.793668] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:34.217 [2024-05-13 02:48:24.832302] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.217 [2024-05-13 02:48:24.861555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.217 [2024-05-13 02:48:24.913753] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:34.217 [2024-05-13 02:48:24.929719] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:34.217 [2024-05-13 02:48:24.930115] tcp.c: 966:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4406 *** 00:07:34.217 INFO: Running with entropic power schedule (0xFF, 100). 00:07:34.217 INFO: Seed: 3689263600 00:07:34.217 INFO: Loaded 1 modules (350429 inline 8-bit counters): 350429 [0x279074c, 0x27e6029), 00:07:34.217 INFO: Loaded 1 PC tables (350429 PCs): 350429 [0x27e6030,0x2d3ee00), 00:07:34.217 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:07:34.217 INFO: A corpus is not provided, starting from an empty corpus 00:07:34.217 #2 INITED exec/s: 0 rss: 63Mb 00:07:34.217 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:34.217 This may also happen if the target rejected all inputs we tried so far 00:07:34.217 [2024-05-13 02:48:24.985301] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:34.217 [2024-05-13 02:48:24.985327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.476 NEW_FUNC[1/684]: 0x4ae770 in fuzz_admin_delete_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:161 00:07:34.476 NEW_FUNC[2/684]: 0x4e0360 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:34.476 #3 NEW cov: 11743 ft: 11744 corp: 2/3b lim: 10 exec/s: 0 rss: 69Mb L: 2/2 MS: 1 CopyPart- 00:07:34.735 [2024-05-13 02:48:25.296089] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a64 cdw11:00000000 00:07:34.735 [2024-05-13 02:48:25.296126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.735 #4 NEW cov: 11873 ft: 12520 corp: 3/5b lim: 10 exec/s: 0 rss: 70Mb L: 2/2 MS: 1 InsertByte- 00:07:34.735 [2024-05-13 02:48:25.336085] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a64 cdw11:00000000 00:07:34.735 [2024-05-13 02:48:25.336111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.735 #5 NEW cov: 11879 ft: 12704 corp: 4/8b lim: 10 exec/s: 0 rss: 70Mb L: 3/3 MS: 1 InsertByte- 00:07:34.735 [2024-05-13 02:48:25.376254] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:34.735 [2024-05-13 02:48:25.376279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.735 #6 NEW cov: 11964 ft: 12975 corp: 5/11b lim: 10 exec/s: 0 rss: 70Mb L: 3/3 MS: 1 InsertByte- 00:07:34.735 [2024-05-13 02:48:25.416350] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:34.735 [2024-05-13 02:48:25.416375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.735 #7 NEW cov: 11964 ft: 13085 corp: 6/13b lim: 10 exec/s: 0 rss: 70Mb L: 2/3 MS: 1 ShuffleBytes- 00:07:34.735 [2024-05-13 02:48:25.456445] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00001a1a cdw11:00000000 00:07:34.735 [2024-05-13 02:48:25.456471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.735 #10 NEW cov: 11964 ft: 13141 corp: 7/15b lim: 10 exec/s: 0 rss: 70Mb L: 2/3 MS: 3 ChangeBinInt-ChangeByte-CopyPart- 00:07:34.735 [2024-05-13 02:48:25.486568] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00001e1a cdw11:00000000 00:07:34.735 [2024-05-13 02:48:25.486594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.735 #11 NEW cov: 11964 ft: 13184 corp: 8/17b lim: 10 exec/s: 0 rss: 70Mb L: 2/3 MS: 1 ChangeBit- 00:07:34.735 [2024-05-13 
02:48:25.526780] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a64 cdw11:00000000 00:07:34.735 [2024-05-13 02:48:25.526805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.735 [2024-05-13 02:48:25.526858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000a64 cdw11:00000000 00:07:34.735 [2024-05-13 02:48:25.526871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.995 #12 NEW cov: 11964 ft: 13392 corp: 9/22b lim: 10 exec/s: 0 rss: 70Mb L: 5/5 MS: 1 CrossOver- 00:07:34.995 [2024-05-13 02:48:25.566869] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000640a cdw11:00000000 00:07:34.995 [2024-05-13 02:48:25.566893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.995 [2024-05-13 02:48:25.566943] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000400a cdw11:00000000 00:07:34.995 [2024-05-13 02:48:25.566957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.995 #13 NEW cov: 11964 ft: 13428 corp: 10/27b lim: 10 exec/s: 0 rss: 70Mb L: 5/5 MS: 1 ShuffleBytes- 00:07:34.995 [2024-05-13 02:48:25.606837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a66 cdw11:00000000 00:07:34.995 [2024-05-13 02:48:25.606861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.995 #14 NEW cov: 11964 ft: 13475 corp: 11/29b lim: 10 exec/s: 0 rss: 70Mb L: 2/5 MS: 1 ChangeBit- 00:07:34.995 [2024-05-13 02:48:25.646964] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00007e0a cdw11:00000000 00:07:34.995 [2024-05-13 02:48:25.646990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.995 #15 NEW cov: 11964 ft: 13521 corp: 12/31b lim: 10 exec/s: 0 rss: 71Mb L: 2/5 MS: 1 InsertByte- 00:07:34.995 [2024-05-13 02:48:25.677115] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a40 cdw11:00000000 00:07:34.995 [2024-05-13 02:48:25.677140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.995 #16 NEW cov: 11964 ft: 13524 corp: 13/34b lim: 10 exec/s: 0 rss: 71Mb L: 3/5 MS: 1 CrossOver- 00:07:34.995 [2024-05-13 02:48:25.717302] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:34.995 [2024-05-13 02:48:25.717328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.995 [2024-05-13 02:48:25.717402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00007987 cdw11:00000000 00:07:34.995 [2024-05-13 02:48:25.717416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.995 #17 NEW cov: 11964 ft: 13571 
corp: 14/38b lim: 10 exec/s: 0 rss: 71Mb L: 4/5 MS: 1 InsertByte- 00:07:34.995 [2024-05-13 02:48:25.767334] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000fe7a cdw11:00000000 00:07:34.995 [2024-05-13 02:48:25.767360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.995 #21 NEW cov: 11964 ft: 13616 corp: 15/40b lim: 10 exec/s: 0 rss: 71Mb L: 2/5 MS: 4 EraseBytes-ChangeBit-ShuffleBytes-InsertByte- 00:07:35.254 [2024-05-13 02:48:25.807839] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:35.254 [2024-05-13 02:48:25.807867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.254 [2024-05-13 02:48:25.807921] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.254 [2024-05-13 02:48:25.807934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.254 [2024-05-13 02:48:25.807987] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.254 [2024-05-13 02:48:25.808000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.254 [2024-05-13 02:48:25.808052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00007987 cdw11:00000000 00:07:35.254 [2024-05-13 02:48:25.808065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.254 #22 NEW cov: 11964 ft: 13900 corp: 16/48b lim: 10 exec/s: 0 rss: 71Mb L: 8/8 MS: 1 InsertRepeatedBytes- 00:07:35.254 [2024-05-13 02:48:25.857736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00007e0a cdw11:00000000 00:07:35.254 [2024-05-13 02:48:25.857762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.254 [2024-05-13 02:48:25.857814] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00007e0a cdw11:00000000 00:07:35.254 [2024-05-13 02:48:25.857828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.254 NEW_FUNC[1/1]: 0x1a07120 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:35.254 #23 NEW cov: 11987 ft: 13963 corp: 17/52b lim: 10 exec/s: 0 rss: 71Mb L: 4/8 MS: 1 CopyPart- 00:07:35.254 [2024-05-13 02:48:25.897708] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000564 cdw11:00000000 00:07:35.254 [2024-05-13 02:48:25.897733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.254 #24 NEW cov: 11987 ft: 14041 corp: 18/54b lim: 10 exec/s: 0 rss: 71Mb L: 2/8 MS: 1 ChangeByte- 00:07:35.254 [2024-05-13 02:48:25.937845] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00003f0a cdw11:00000000 00:07:35.254 [2024-05-13 02:48:25.937870] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.254 #25 NEW cov: 11987 ft: 14050 corp: 19/56b lim: 10 exec/s: 0 rss: 71Mb L: 2/8 MS: 1 InsertByte- 00:07:35.254 [2024-05-13 02:48:25.968165] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:35.254 [2024-05-13 02:48:25.968189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.254 [2024-05-13 02:48:25.968243] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000eeee cdw11:00000000 00:07:35.254 [2024-05-13 02:48:25.968256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.255 [2024-05-13 02:48:25.968307] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ee87 cdw11:00000000 00:07:35.255 [2024-05-13 02:48:25.968320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.255 #26 NEW cov: 11987 ft: 14195 corp: 20/62b lim: 10 exec/s: 26 rss: 71Mb L: 6/8 MS: 1 InsertRepeatedBytes- 00:07:35.255 [2024-05-13 02:48:26.008299] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:35.255 [2024-05-13 02:48:26.008325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.255 [2024-05-13 02:48:26.008385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:35.255 [2024-05-13 02:48:26.008399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.255 [2024-05-13 02:48:26.008449] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000eeee cdw11:00000000 00:07:35.255 [2024-05-13 02:48:26.008462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.255 #27 NEW cov: 11987 ft: 14205 corp: 21/69b lim: 10 exec/s: 27 rss: 71Mb L: 7/8 MS: 1 CrossOver- 00:07:35.255 [2024-05-13 02:48:26.048127] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a08 cdw11:00000000 00:07:35.255 [2024-05-13 02:48:26.048151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.514 #28 NEW cov: 11987 ft: 14269 corp: 22/71b lim: 10 exec/s: 28 rss: 72Mb L: 2/8 MS: 1 ChangeBit- 00:07:35.514 [2024-05-13 02:48:26.088292] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a1f cdw11:00000000 00:07:35.514 [2024-05-13 02:48:26.088318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.514 #29 NEW cov: 11987 ft: 14278 corp: 23/73b lim: 10 exec/s: 29 rss: 72Mb L: 2/8 MS: 1 InsertByte- 00:07:35.514 [2024-05-13 02:48:26.118727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a04 cdw11:00000000 00:07:35.514 [2024-05-13 02:48:26.118752] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.514 [2024-05-13 02:48:26.118805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000404 cdw11:00000000 00:07:35.514 [2024-05-13 02:48:26.118818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.514 [2024-05-13 02:48:26.118869] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000aee cdw11:00000000 00:07:35.514 [2024-05-13 02:48:26.118882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.514 [2024-05-13 02:48:26.118935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000eeee cdw11:00000000 00:07:35.514 [2024-05-13 02:48:26.118948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.514 #30 NEW cov: 11987 ft: 14358 corp: 24/82b lim: 10 exec/s: 30 rss: 72Mb L: 9/9 MS: 1 InsertRepeatedBytes- 00:07:35.514 [2024-05-13 02:48:26.158507] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00007e02 cdw11:00000000 00:07:35.514 [2024-05-13 02:48:26.158532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.514 #31 NEW cov: 11987 ft: 14409 corp: 25/84b lim: 10 exec/s: 31 rss: 72Mb L: 2/9 MS: 1 ChangeBit- 00:07:35.514 [2024-05-13 02:48:26.198578] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a7e cdw11:00000000 00:07:35.514 [2024-05-13 02:48:26.198603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.514 #32 NEW cov: 11987 ft: 14420 corp: 26/86b lim: 10 exec/s: 32 rss: 72Mb L: 2/9 MS: 1 ShuffleBytes- 00:07:35.514 [2024-05-13 02:48:26.238688] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000870a cdw11:00000000 00:07:35.514 [2024-05-13 02:48:26.238712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.514 #33 NEW cov: 11987 ft: 14446 corp: 27/89b lim: 10 exec/s: 33 rss: 72Mb L: 3/9 MS: 1 ShuffleBytes- 00:07:35.514 [2024-05-13 02:48:26.278817] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:35.514 [2024-05-13 02:48:26.278842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.514 #34 NEW cov: 11987 ft: 14465 corp: 28/92b lim: 10 exec/s: 34 rss: 72Mb L: 3/9 MS: 1 ChangeByte- 00:07:35.514 [2024-05-13 02:48:26.308879] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000009fe cdw11:00000000 00:07:35.514 [2024-05-13 02:48:26.308904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.773 #35 NEW cov: 11987 ft: 14472 corp: 29/94b lim: 10 exec/s: 35 rss: 72Mb L: 2/9 MS: 1 ChangeBinInt- 00:07:35.773 [2024-05-13 02:48:26.349020] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00004a0a cdw11:00000000 00:07:35.773 [2024-05-13 02:48:26.349044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.774 #36 NEW cov: 11987 ft: 14487 corp: 30/96b lim: 10 exec/s: 36 rss: 72Mb L: 2/9 MS: 1 ChangeBit- 00:07:35.774 [2024-05-13 02:48:26.389112] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000544 cdw11:00000000 00:07:35.774 [2024-05-13 02:48:26.389136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.774 #37 NEW cov: 11987 ft: 14504 corp: 31/98b lim: 10 exec/s: 37 rss: 72Mb L: 2/9 MS: 1 ChangeBit- 00:07:35.774 [2024-05-13 02:48:26.429331] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00007eff cdw11:00000000 00:07:35.774 [2024-05-13 02:48:26.429356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.774 [2024-05-13 02:48:26.429412] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:35.774 [2024-05-13 02:48:26.429425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.774 #38 NEW cov: 11987 ft: 14506 corp: 32/103b lim: 10 exec/s: 38 rss: 72Mb L: 5/9 MS: 1 InsertRepeatedBytes- 00:07:35.774 [2024-05-13 02:48:26.469612] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a64 cdw11:00000000 00:07:35.774 [2024-05-13 02:48:26.469636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.774 [2024-05-13 02:48:26.469690] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000a64 cdw11:00000000 00:07:35.774 [2024-05-13 02:48:26.469703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.774 [2024-05-13 02:48:26.469751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ee87 cdw11:00000000 00:07:35.774 [2024-05-13 02:48:26.469765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.774 #39 NEW cov: 11987 ft: 14508 corp: 33/109b lim: 10 exec/s: 39 rss: 72Mb L: 6/9 MS: 1 CrossOver- 00:07:35.774 [2024-05-13 02:48:26.509683] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:35.774 [2024-05-13 02:48:26.509707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.774 [2024-05-13 02:48:26.509759] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:35.774 [2024-05-13 02:48:26.509772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.774 [2024-05-13 02:48:26.509827] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000079 cdw11:00000000 00:07:35.774 [2024-05-13 
02:48:26.509841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.774 #40 NEW cov: 11987 ft: 14532 corp: 34/116b lim: 10 exec/s: 40 rss: 72Mb L: 7/9 MS: 1 EraseBytes- 00:07:35.774 [2024-05-13 02:48:26.549527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:35.774 [2024-05-13 02:48:26.549552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.774 #41 NEW cov: 11987 ft: 14543 corp: 35/119b lim: 10 exec/s: 41 rss: 72Mb L: 3/9 MS: 1 InsertByte- 00:07:36.034 [2024-05-13 02:48:26.579663] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000564 cdw11:00000000 00:07:36.034 [2024-05-13 02:48:26.579689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.034 #42 NEW cov: 11987 ft: 14548 corp: 36/121b lim: 10 exec/s: 42 rss: 72Mb L: 2/9 MS: 1 ShuffleBytes- 00:07:36.034 [2024-05-13 02:48:26.619937] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a66 cdw11:00000000 00:07:36.034 [2024-05-13 02:48:26.619961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.034 [2024-05-13 02:48:26.620016] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000b4b4 cdw11:00000000 00:07:36.034 [2024-05-13 02:48:26.620030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.034 [2024-05-13 02:48:26.620083] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000b4b4 cdw11:00000000 00:07:36.034 [2024-05-13 02:48:26.620096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.034 #43 NEW cov: 11987 ft: 14557 corp: 37/128b lim: 10 exec/s: 43 rss: 72Mb L: 7/9 MS: 1 InsertRepeatedBytes- 00:07:36.034 [2024-05-13 02:48:26.659933] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:36.034 [2024-05-13 02:48:26.659957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.034 [2024-05-13 02:48:26.660007] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000a79 cdw11:00000000 00:07:36.034 [2024-05-13 02:48:26.660020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.034 #44 NEW cov: 11987 ft: 14562 corp: 38/132b lim: 10 exec/s: 44 rss: 72Mb L: 4/9 MS: 1 CopyPart- 00:07:36.034 [2024-05-13 02:48:26.699956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000564 cdw11:00000000 00:07:36.034 [2024-05-13 02:48:26.699981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.034 #45 NEW cov: 11987 ft: 14566 corp: 39/134b lim: 10 exec/s: 45 rss: 72Mb L: 2/9 MS: 1 ShuffleBytes- 00:07:36.034 [2024-05-13 02:48:26.730412] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a64 cdw11:00000000 00:07:36.034 [2024-05-13 02:48:26.730437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.034 [2024-05-13 02:48:26.730488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000a64 cdw11:00000000 00:07:36.034 [2024-05-13 02:48:26.730501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.034 [2024-05-13 02:48:26.730550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ee45 cdw11:00000000 00:07:36.034 [2024-05-13 02:48:26.730566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.034 [2024-05-13 02:48:26.730614] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00004545 cdw11:00000000 00:07:36.034 [2024-05-13 02:48:26.730626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:36.034 #46 NEW cov: 11987 ft: 14573 corp: 40/143b lim: 10 exec/s: 46 rss: 72Mb L: 9/9 MS: 1 InsertRepeatedBytes- 00:07:36.034 [2024-05-13 02:48:26.770669] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000500 cdw11:00000000 00:07:36.034 [2024-05-13 02:48:26.770693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.034 [2024-05-13 02:48:26.770744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:36.034 [2024-05-13 02:48:26.770757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.034 [2024-05-13 02:48:26.770808] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:36.034 [2024-05-13 02:48:26.770821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.034 [2024-05-13 02:48:26.770872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:36.034 [2024-05-13 02:48:26.770885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:36.034 [2024-05-13 02:48:26.770937] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000044 cdw11:00000000 00:07:36.034 [2024-05-13 02:48:26.770949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:36.034 #47 NEW cov: 11987 ft: 14632 corp: 41/153b lim: 10 exec/s: 47 rss: 73Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:07:36.034 [2024-05-13 02:48:26.810448] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00001164 cdw11:00000000 00:07:36.034 [2024-05-13 02:48:26.810473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.034 [2024-05-13 
02:48:26.810526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000a64 cdw11:00000000 00:07:36.034 [2024-05-13 02:48:26.810539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.294 [2024-05-13 02:48:26.850569] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00001164 cdw11:00000000 00:07:36.294 [2024-05-13 02:48:26.850593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.294 [2024-05-13 02:48:26.850662] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000a64 cdw11:00000000 00:07:36.294 [2024-05-13 02:48:26.850676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.294 #49 NEW cov: 11987 ft: 14634 corp: 42/158b lim: 10 exec/s: 49 rss: 73Mb L: 5/10 MS: 2 ChangeBinInt-CopyPart- 00:07:36.294 [2024-05-13 02:48:26.890666] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0b cdw11:00000000 00:07:36.294 [2024-05-13 02:48:26.890690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.294 [2024-05-13 02:48:26.890740] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00007987 cdw11:00000000 00:07:36.294 [2024-05-13 02:48:26.890757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.294 #50 NEW cov: 11987 ft: 14641 corp: 43/162b lim: 10 exec/s: 50 rss: 73Mb L: 4/10 MS: 1 ChangeBit- 00:07:36.294 [2024-05-13 02:48:26.930671] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a64 cdw11:00000000 00:07:36.294 [2024-05-13 02:48:26.930695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.294 #51 NEW cov: 11987 ft: 14652 corp: 44/165b lim: 10 exec/s: 51 rss: 73Mb L: 3/10 MS: 1 EraseBytes- 00:07:36.294 [2024-05-13 02:48:26.970877] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00007eff cdw11:00000000 00:07:36.294 [2024-05-13 02:48:26.970902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.294 [2024-05-13 02:48:26.970952] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ff2b cdw11:00000000 00:07:36.294 [2024-05-13 02:48:26.970965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.294 #52 NEW cov: 11987 ft: 14674 corp: 45/170b lim: 10 exec/s: 26 rss: 73Mb L: 5/10 MS: 1 ChangeByte- 00:07:36.294 #52 DONE cov: 11987 ft: 14674 corp: 45/170b lim: 10 exec/s: 26 rss: 73Mb 00:07:36.294 Done 52 runs in 2 second(s) 00:07:36.294 [2024-05-13 02:48:26.999101] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:36.553 02:48:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_6.conf 
/var/tmp/suppress_nvmf_fuzz 00:07:36.553 02:48:27 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:36.553 02:48:27 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:36.553 02:48:27 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 7 1 0x1 00:07:36.553 02:48:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=7 00:07:36.553 02:48:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:36.553 02:48:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:36.553 02:48:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:07:36.553 02:48:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_7.conf 00:07:36.553 02:48:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:36.553 02:48:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:36.553 02:48:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 7 00:07:36.553 02:48:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4407 00:07:36.553 02:48:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:07:36.553 02:48:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' 00:07:36.553 02:48:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4407"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:36.553 02:48:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:36.553 02:48:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:36.553 02:48:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' -c /tmp/fuzz_json_7.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 -Z 7 00:07:36.553 [2024-05-13 02:48:27.161456] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:07:36.553 [2024-05-13 02:48:27.161543] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3500510 ] 00:07:36.553 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.813 [2024-05-13 02:48:27.371808] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
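For readability, the per-target setup that the nvmf/run.sh xtrace above records can be summarized in the following sketch. It only restates the traced commands for fuzzer target 7; the generic variable names, the export of LSAN_OPTIONS, and the output redirections (which bash xtrace does not print) are assumptions, not part of the log.

# minimal sketch of one start_llvm_fuzz iteration, reconstructed from the xtrace above
# (fuzzer_type=7, timen=1, core=0x1 as passed by ../common.sh@73)
rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk   # workspace path as it appears in the trace
fuzzer_type=7
timen=1
core=0x1
corpus_dir=$rootdir/../corpus/llvm_nvmf_$fuzzer_type          # run.sh@26
nvmf_cfg=/tmp/fuzz_json_${fuzzer_type}.conf                   # run.sh@27
suppress_file=/var/tmp/suppress_nvmf_fuzz                     # run.sh@28
port="44$(printf %02d $fuzzer_type)"                          # run.sh@34: 4407 for target 7
mkdir -p $corpus_dir                                          # run.sh@35
trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"   # run.sh@37
# run.sh@38: rewrite the stock config (trsvcid 4420) for this target's port; redirecting the
# result into $nvmf_cfg is an assumption, since the fuzzer is later started with -c $nvmf_cfg
sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" $rootdir/test/fuzz/llvm/nvmf/fuzz_json.conf > $nvmf_cfg
# run.sh@41-42: LeakSanitizer suppressions; writing them to $suppress_file is likewise assumed
echo leak:spdk_nvmf_qpair_disconnect > $suppress_file
echo leak:nvmf_ctrlr_create >> $suppress_file
# run.sh@45: one-second libFuzzer run against an NVMe/TCP listener on 127.0.0.1:$port
LSAN_OPTIONS=report_objects=1:suppressions=$suppress_file:print_suppressions=0 \
    $rootdir/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m $core -s 512 \
    -P $rootdir/../output/llvm/ -F "$trid" -c $nvmf_cfg -t $timen -D $corpus_dir -Z $fuzzer_type
# run.sh@54 (start of the next iteration): rm -rf $nvmf_cfg $suppress_file

The same sequence repeats below for fuzzer target 8 with port 4408, /tmp/fuzz_json_8.conf, and -Z 8.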
00:07:36.813 [2024-05-13 02:48:27.411350] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.813 [2024-05-13 02:48:27.440853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.813 [2024-05-13 02:48:27.493420] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:36.813 [2024-05-13 02:48:27.509368] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:36.813 [2024-05-13 02:48:27.509781] tcp.c: 966:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4407 *** 00:07:36.813 INFO: Running with entropic power schedule (0xFF, 100). 00:07:36.813 INFO: Seed: 1972331525 00:07:36.813 INFO: Loaded 1 modules (350429 inline 8-bit counters): 350429 [0x279074c, 0x27e6029), 00:07:36.813 INFO: Loaded 1 PC tables (350429 PCs): 350429 [0x27e6030,0x2d3ee00), 00:07:36.813 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:07:36.813 INFO: A corpus is not provided, starting from an empty corpus 00:07:36.813 #2 INITED exec/s: 0 rss: 63Mb 00:07:36.813 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:36.813 This may also happen if the target rejected all inputs we tried so far 00:07:36.813 [2024-05-13 02:48:27.558848] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000b2b2 cdw11:00000000 00:07:36.813 [2024-05-13 02:48:27.558877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.813 [2024-05-13 02:48:27.558943] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000b20a cdw11:00000000 00:07:36.813 [2024-05-13 02:48:27.558958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.072 NEW_FUNC[1/684]: 0x4af160 in fuzz_admin_delete_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:172 00:07:37.072 NEW_FUNC[2/684]: 0x4e0360 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:37.072 #3 NEW cov: 11743 ft: 11744 corp: 2/5b lim: 10 exec/s: 0 rss: 70Mb L: 4/4 MS: 1 InsertRepeatedBytes- 00:07:37.332 [2024-05-13 02:48:27.890496] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000b2b2 cdw11:00000000 00:07:37.332 [2024-05-13 02:48:27.890547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.332 #4 NEW cov: 11873 ft: 12577 corp: 3/7b lim: 10 exec/s: 0 rss: 70Mb L: 2/4 MS: 1 EraseBytes- 00:07:37.332 [2024-05-13 02:48:27.941158] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000b2b2 cdw11:00000000 00:07:37.332 [2024-05-13 02:48:27.941185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.332 [2024-05-13 02:48:27.941305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000b2b2 cdw11:00000000 00:07:37.332 [2024-05-13 02:48:27.941323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.332 [2024-05-13 02:48:27.941437] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000b2b2 cdw11:00000000 00:07:37.332 [2024-05-13 02:48:27.941454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.332 [2024-05-13 02:48:27.941570] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:37.332 [2024-05-13 02:48:27.941593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:37.332 #5 NEW cov: 11879 ft: 13071 corp: 4/15b lim: 10 exec/s: 0 rss: 70Mb L: 8/8 MS: 1 CopyPart- 00:07:37.332 [2024-05-13 02:48:27.981235] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:37.332 [2024-05-13 02:48:27.981263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.332 [2024-05-13 02:48:27.981384] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:37.332 [2024-05-13 02:48:27.981402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.332 [2024-05-13 02:48:27.981515] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:37.332 [2024-05-13 02:48:27.981533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.332 [2024-05-13 02:48:27.981648] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:37.332 [2024-05-13 02:48:27.981665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:37.332 [2024-05-13 02:48:27.981776] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 00:07:37.332 [2024-05-13 02:48:27.981793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:37.332 #8 NEW cov: 11964 ft: 13310 corp: 5/25b lim: 10 exec/s: 0 rss: 70Mb L: 10/10 MS: 3 CopyPart-ShuffleBytes-InsertRepeatedBytes- 00:07:37.332 [2024-05-13 02:48:28.030716] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00003838 cdw11:00000000 00:07:37.332 [2024-05-13 02:48:28.030742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.332 #11 NEW cov: 11964 ft: 13544 corp: 6/27b lim: 10 exec/s: 0 rss: 70Mb L: 2/10 MS: 3 ShuffleBytes-ChangeByte-CopyPart- 00:07:37.332 [2024-05-13 02:48:28.081023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000b2b2 cdw11:00000000 00:07:37.332 [2024-05-13 02:48:28.081048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.332 #12 NEW cov: 11964 ft: 13615 corp: 7/29b lim: 10 exec/s: 0 rss: 70Mb L: 2/10 MS: 1 ShuffleBytes- 00:07:37.591 
[2024-05-13 02:48:28.141138] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00003897 cdw11:00000000 00:07:37.591 [2024-05-13 02:48:28.141166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.591 #13 NEW cov: 11964 ft: 13798 corp: 8/32b lim: 10 exec/s: 0 rss: 70Mb L: 3/10 MS: 1 InsertByte- 00:07:37.591 [2024-05-13 02:48:28.191555] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000b2b2 cdw11:00000000 00:07:37.591 [2024-05-13 02:48:28.191587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.591 [2024-05-13 02:48:28.191701] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000b2b2 cdw11:00000000 00:07:37.591 [2024-05-13 02:48:28.191720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.592 [2024-05-13 02:48:28.191839] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000b238 cdw11:00000000 00:07:37.592 [2024-05-13 02:48:28.191856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.592 [2024-05-13 02:48:28.191974] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000380a cdw11:00000000 00:07:37.592 [2024-05-13 02:48:28.191990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:37.592 #14 NEW cov: 11964 ft: 13862 corp: 9/40b lim: 10 exec/s: 0 rss: 70Mb L: 8/10 MS: 1 CrossOver- 00:07:37.592 [2024-05-13 02:48:28.241971] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000b2b2 cdw11:00000000 00:07:37.592 [2024-05-13 02:48:28.241996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.592 [2024-05-13 02:48:28.242105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000b2b2 cdw11:00000000 00:07:37.592 [2024-05-13 02:48:28.242120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.592 [2024-05-13 02:48:28.242230] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000b238 cdw11:00000000 00:07:37.592 [2024-05-13 02:48:28.242249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.592 [2024-05-13 02:48:28.242367] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000580a cdw11:00000000 00:07:37.592 [2024-05-13 02:48:28.242386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:37.592 #15 NEW cov: 11964 ft: 13934 corp: 10/48b lim: 10 exec/s: 0 rss: 70Mb L: 8/10 MS: 1 ChangeByte- 00:07:37.592 [2024-05-13 02:48:28.291659] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000b2b2 cdw11:00000000 00:07:37.592 [2024-05-13 02:48:28.291686] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.592 [2024-05-13 02:48:28.291798] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000b2b2 cdw11:00000000 00:07:37.592 [2024-05-13 02:48:28.291815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.592 #16 NEW cov: 11964 ft: 14019 corp: 11/52b lim: 10 exec/s: 0 rss: 70Mb L: 4/10 MS: 1 CopyPart- 00:07:37.592 [2024-05-13 02:48:28.331196] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000b238 cdw11:00000000 00:07:37.592 [2024-05-13 02:48:28.331224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.592 #17 NEW cov: 11964 ft: 14097 corp: 12/55b lim: 10 exec/s: 0 rss: 70Mb L: 3/10 MS: 1 CrossOver- 00:07:37.592 [2024-05-13 02:48:28.382470] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000b2b2 cdw11:00000000 00:07:37.592 [2024-05-13 02:48:28.382495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.592 [2024-05-13 02:48:28.382613] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000b2b2 cdw11:00000000 00:07:37.592 [2024-05-13 02:48:28.382632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.592 [2024-05-13 02:48:28.382752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000b238 cdw11:00000000 00:07:37.592 [2024-05-13 02:48:28.382769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.592 [2024-05-13 02:48:28.382882] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000380a cdw11:00000000 00:07:37.592 [2024-05-13 02:48:28.382901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:37.851 #18 NEW cov: 11964 ft: 14157 corp: 13/63b lim: 10 exec/s: 0 rss: 70Mb L: 8/10 MS: 1 ShuffleBytes- 00:07:37.851 [2024-05-13 02:48:28.422177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000b2b2 cdw11:00000000 00:07:37.851 [2024-05-13 02:48:28.422203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.851 [2024-05-13 02:48:28.422311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000b2b2 cdw11:00000000 00:07:37.851 [2024-05-13 02:48:28.422338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.851 [2024-05-13 02:48:28.422452] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000a38 cdw11:00000000 00:07:37.851 [2024-05-13 02:48:28.422469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.851 [2024-05-13 02:48:28.422584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) 
qid:0 cid:7 nsid:0 cdw10:0000380a cdw11:00000000 00:07:37.851 [2024-05-13 02:48:28.422602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:37.851 NEW_FUNC[1/1]: 0x1a07120 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:37.851 #19 NEW cov: 11987 ft: 14197 corp: 14/71b lim: 10 exec/s: 0 rss: 70Mb L: 8/10 MS: 1 CrossOver- 00:07:37.851 [2024-05-13 02:48:28.472258] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000b2b2 cdw11:00000000 00:07:37.851 [2024-05-13 02:48:28.472284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.851 [2024-05-13 02:48:28.472405] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000032b2 cdw11:00000000 00:07:37.851 [2024-05-13 02:48:28.472435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.851 [2024-05-13 02:48:28.472550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000b238 cdw11:00000000 00:07:37.851 [2024-05-13 02:48:28.472569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.851 [2024-05-13 02:48:28.472682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000380a cdw11:00000000 00:07:37.851 [2024-05-13 02:48:28.472700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:37.851 #20 NEW cov: 11987 ft: 14288 corp: 15/79b lim: 10 exec/s: 0 rss: 70Mb L: 8/10 MS: 1 ChangeBit- 00:07:37.851 [2024-05-13 02:48:28.522457] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000b2b2 cdw11:00000000 00:07:37.851 [2024-05-13 02:48:28.522484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.851 [2024-05-13 02:48:28.522613] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000b20a cdw11:00000000 00:07:37.851 [2024-05-13 02:48:28.522630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.851 #21 NEW cov: 11987 ft: 14308 corp: 16/84b lim: 10 exec/s: 21 rss: 70Mb L: 5/10 MS: 1 InsertByte- 00:07:37.851 [2024-05-13 02:48:28.562754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000b284 cdw11:00000000 00:07:37.851 [2024-05-13 02:48:28.562782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.851 [2024-05-13 02:48:28.562895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00008484 cdw11:00000000 00:07:37.851 [2024-05-13 02:48:28.562918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.851 [2024-05-13 02:48:28.563038] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00008484 cdw11:00000000 00:07:37.851 [2024-05-13 02:48:28.563054] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.851 [2024-05-13 02:48:28.563173] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00008438 cdw11:00000000 00:07:37.851 [2024-05-13 02:48:28.563192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:37.851 #22 NEW cov: 11987 ft: 14324 corp: 17/93b lim: 10 exec/s: 22 rss: 70Mb L: 9/10 MS: 1 InsertRepeatedBytes- 00:07:37.851 [2024-05-13 02:48:28.613132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000b2b2 cdw11:00000000 00:07:37.851 [2024-05-13 02:48:28.613158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.851 [2024-05-13 02:48:28.613275] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000b2b2 cdw11:00000000 00:07:37.851 [2024-05-13 02:48:28.613291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.851 [2024-05-13 02:48:28.613407] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000b2b2 cdw11:00000000 00:07:37.851 [2024-05-13 02:48:28.613426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.851 [2024-05-13 02:48:28.613549] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00003f0a cdw11:00000000 00:07:37.851 [2024-05-13 02:48:28.613565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:37.851 #23 NEW cov: 11987 ft: 14332 corp: 18/101b lim: 10 exec/s: 23 rss: 70Mb L: 8/10 MS: 1 ChangeByte- 00:07:37.851 [2024-05-13 02:48:28.653246] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000b2b2 cdw11:00000000 00:07:37.851 [2024-05-13 02:48:28.653273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.851 [2024-05-13 02:48:28.653396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000b2b2 cdw11:00000000 00:07:37.851 [2024-05-13 02:48:28.653414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.851 [2024-05-13 02:48:28.653528] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffb2 cdw11:00000000 00:07:37.851 [2024-05-13 02:48:28.653545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.851 [2024-05-13 02:48:28.653663] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00003838 cdw11:00000000 00:07:37.851 [2024-05-13 02:48:28.653681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:38.110 #24 NEW cov: 11987 ft: 14351 corp: 19/110b lim: 10 exec/s: 24 rss: 70Mb L: 9/10 MS: 1 InsertByte- 00:07:38.110 [2024-05-13 02:48:28.692804] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000b2b2 cdw11:00000000 00:07:38.110 [2024-05-13 02:48:28.692834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.110 [2024-05-13 02:48:28.692944] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000b2b2 cdw11:00000000 00:07:38.111 [2024-05-13 02:48:28.692963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.111 #25 NEW cov: 11987 ft: 14423 corp: 20/114b lim: 10 exec/s: 25 rss: 71Mb L: 4/10 MS: 1 CopyPart- 00:07:38.111 [2024-05-13 02:48:28.752916] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00003897 cdw11:00000000 00:07:38.111 [2024-05-13 02:48:28.752945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.111 #26 NEW cov: 11987 ft: 14438 corp: 21/116b lim: 10 exec/s: 26 rss: 71Mb L: 2/10 MS: 1 EraseBytes- 00:07:38.111 [2024-05-13 02:48:28.813735] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000b2b2 cdw11:00000000 00:07:38.111 [2024-05-13 02:48:28.813764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.111 [2024-05-13 02:48:28.813889] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000032b2 cdw11:00000000 00:07:38.111 [2024-05-13 02:48:28.813907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.111 [2024-05-13 02:48:28.814020] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00003f38 cdw11:00000000 00:07:38.111 [2024-05-13 02:48:28.814038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.111 [2024-05-13 02:48:28.814153] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000380a cdw11:00000000 00:07:38.111 [2024-05-13 02:48:28.814170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:38.111 #27 NEW cov: 11987 ft: 14451 corp: 22/124b lim: 10 exec/s: 27 rss: 71Mb L: 8/10 MS: 1 ChangeByte- 00:07:38.111 [2024-05-13 02:48:28.873290] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00003827 cdw11:00000000 00:07:38.111 [2024-05-13 02:48:28.873317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.111 #29 NEW cov: 11987 ft: 14474 corp: 23/126b lim: 10 exec/s: 29 rss: 71Mb L: 2/10 MS: 2 EraseBytes-InsertByte- 00:07:38.370 [2024-05-13 02:48:28.933442] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:38.370 [2024-05-13 02:48:28.933470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.370 #30 NEW cov: 11987 ft: 14476 corp: 24/128b lim: 10 exec/s: 30 rss: 71Mb L: 2/10 MS: 1 CopyPart- 00:07:38.370 [2024-05-13 02:48:28.973934] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000b2b2 cdw11:00000000 00:07:38.370 [2024-05-13 02:48:28.973961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.370 [2024-05-13 02:48:28.974083] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000b2b2 cdw11:00000000 00:07:38.370 [2024-05-13 02:48:28.974101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.370 [2024-05-13 02:48:28.974218] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000b2b2 cdw11:00000000 00:07:38.370 [2024-05-13 02:48:28.974235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.370 [2024-05-13 02:48:28.974361] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000b238 cdw11:00000000 00:07:38.370 [2024-05-13 02:48:28.974384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:38.370 [2024-05-13 02:48:28.974497] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000380a cdw11:00000000 00:07:38.370 [2024-05-13 02:48:28.974520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:38.370 #31 NEW cov: 11987 ft: 14490 corp: 25/138b lim: 10 exec/s: 31 rss: 71Mb L: 10/10 MS: 1 CopyPart- 00:07:38.370 [2024-05-13 02:48:29.024273] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000b2b2 cdw11:00000000 00:07:38.370 [2024-05-13 02:48:29.024300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.370 [2024-05-13 02:48:29.024414] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000b2b2 cdw11:00000000 00:07:38.370 [2024-05-13 02:48:29.024432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.370 [2024-05-13 02:48:29.024547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000a38 cdw11:00000000 00:07:38.370 [2024-05-13 02:48:29.024564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.370 [2024-05-13 02:48:29.024668] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00003827 cdw11:00000000 00:07:38.370 [2024-05-13 02:48:29.024687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:38.370 #32 NEW cov: 11987 ft: 14502 corp: 26/146b lim: 10 exec/s: 32 rss: 71Mb L: 8/10 MS: 1 CrossOver- 00:07:38.370 [2024-05-13 02:48:29.083969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000b2b2 cdw11:00000000 00:07:38.370 [2024-05-13 02:48:29.083994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.370 [2024-05-13 02:48:29.084107] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000b20a cdw11:00000000 00:07:38.370 [2024-05-13 02:48:29.084126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.370 #33 NEW cov: 11987 ft: 14524 corp: 27/150b lim: 10 exec/s: 33 rss: 71Mb L: 4/10 MS: 1 CrossOver- 00:07:38.370 [2024-05-13 02:48:29.133957] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00004ec7 cdw11:00000000 00:07:38.370 [2024-05-13 02:48:29.133983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.370 #34 NEW cov: 11987 ft: 14540 corp: 28/153b lim: 10 exec/s: 34 rss: 71Mb L: 3/10 MS: 1 ChangeBinInt- 00:07:38.630 [2024-05-13 02:48:29.184748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000b2b2 cdw11:00000000 00:07:38.630 [2024-05-13 02:48:29.184776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.630 [2024-05-13 02:48:29.184888] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00003838 cdw11:00000000 00:07:38.630 [2024-05-13 02:48:29.184906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.630 [2024-05-13 02:48:29.185016] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000b2b2 cdw11:00000000 00:07:38.630 [2024-05-13 02:48:29.185035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.630 [2024-05-13 02:48:29.185149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:38.630 [2024-05-13 02:48:29.185168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:38.630 #35 NEW cov: 11987 ft: 14541 corp: 29/161b lim: 10 exec/s: 35 rss: 71Mb L: 8/10 MS: 1 ShuffleBytes- 00:07:38.630 [2024-05-13 02:48:29.235205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00004e6c cdw11:00000000 00:07:38.630 [2024-05-13 02:48:29.235236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.630 [2024-05-13 02:48:29.235358] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00006c6c cdw11:00000000 00:07:38.630 [2024-05-13 02:48:29.235378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.630 [2024-05-13 02:48:29.235493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00006c6c cdw11:00000000 00:07:38.630 [2024-05-13 02:48:29.235509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.630 [2024-05-13 02:48:29.235613] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00006c6c cdw11:00000000 00:07:38.630 [2024-05-13 02:48:29.235632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:38.630 [2024-05-13 02:48:29.235741] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000c797 cdw11:00000000 00:07:38.630 [2024-05-13 02:48:29.235757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:38.630 #36 NEW cov: 11987 ft: 14569 corp: 30/171b lim: 10 exec/s: 36 rss: 71Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:07:38.630 [2024-05-13 02:48:29.284662] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000b2b2 cdw11:00000000 00:07:38.630 [2024-05-13 02:48:29.284689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.630 [2024-05-13 02:48:29.284800] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000b2b2 cdw11:00000000 00:07:38.630 [2024-05-13 02:48:29.284817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.630 #37 NEW cov: 11987 ft: 14581 corp: 31/175b lim: 10 exec/s: 37 rss: 71Mb L: 4/10 MS: 1 CopyPart- 00:07:38.630 [2024-05-13 02:48:29.325202] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000b2b2 cdw11:00000000 00:07:38.630 [2024-05-13 02:48:29.325229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.630 [2024-05-13 02:48:29.325339] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000032b2 cdw11:00000000 00:07:38.630 [2024-05-13 02:48:29.325357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.630 [2024-05-13 02:48:29.325481] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00003f38 cdw11:00000000 00:07:38.630 [2024-05-13 02:48:29.325496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.630 [2024-05-13 02:48:29.325603] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000380a cdw11:00000000 00:07:38.630 [2024-05-13 02:48:29.325621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:38.630 #38 NEW cov: 11987 ft: 14647 corp: 32/184b lim: 10 exec/s: 38 rss: 71Mb L: 9/10 MS: 1 CopyPart- 00:07:38.630 [2024-05-13 02:48:29.375153] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000b2b2 cdw11:00000000 00:07:38.630 [2024-05-13 02:48:29.375179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.630 [2024-05-13 02:48:29.375296] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000b298 cdw11:00000000 00:07:38.630 [2024-05-13 02:48:29.375318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.630 [2024-05-13 02:48:29.375443] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00009898 
cdw11:00000000 00:07:38.630 [2024-05-13 02:48:29.375462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.630 #39 NEW cov: 11987 ft: 14775 corp: 33/191b lim: 10 exec/s: 39 rss: 71Mb L: 7/10 MS: 1 InsertRepeatedBytes- 00:07:38.630 [2024-05-13 02:48:29.414848] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000b2b2 cdw11:00000000 00:07:38.630 [2024-05-13 02:48:29.414876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.890 #40 NEW cov: 11987 ft: 14786 corp: 34/193b lim: 10 exec/s: 40 rss: 71Mb L: 2/10 MS: 1 EraseBytes- 00:07:38.890 [2024-05-13 02:48:29.455153] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000b225 cdw11:00000000 00:07:38.890 [2024-05-13 02:48:29.455179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.890 [2024-05-13 02:48:29.455291] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000032b2 cdw11:00000000 00:07:38.890 [2024-05-13 02:48:29.455308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.890 [2024-05-13 02:48:29.455436] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000b238 cdw11:00000000 00:07:38.890 [2024-05-13 02:48:29.455469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.890 [2024-05-13 02:48:29.455580] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000380a cdw11:00000000 00:07:38.890 [2024-05-13 02:48:29.455596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:38.890 #41 NEW cov: 11987 ft: 14835 corp: 35/201b lim: 10 exec/s: 41 rss: 72Mb L: 8/10 MS: 1 ChangeByte- 00:07:38.890 [2024-05-13 02:48:29.495937] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000b2b2 cdw11:00000000 00:07:38.890 [2024-05-13 02:48:29.495963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.890 [2024-05-13 02:48:29.496077] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000b2b2 cdw11:00000000 00:07:38.890 [2024-05-13 02:48:29.496105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.890 [2024-05-13 02:48:29.496213] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffb2 cdw11:00000000 00:07:38.890 [2024-05-13 02:48:29.496231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.890 [2024-05-13 02:48:29.496346] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00003838 cdw11:00000000 00:07:38.890 [2024-05-13 02:48:29.496363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:38.890 [2024-05-13 
02:48:29.496483] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00000a29 cdw11:00000000 00:07:38.890 [2024-05-13 02:48:29.496501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:38.890 #42 NEW cov: 11987 ft: 14836 corp: 36/211b lim: 10 exec/s: 42 rss: 72Mb L: 10/10 MS: 1 InsertByte- 00:07:38.890 [2024-05-13 02:48:29.545347] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000b2b2 cdw11:00000000 00:07:38.890 [2024-05-13 02:48:29.545377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.890 [2024-05-13 02:48:29.545504] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000032b2 cdw11:00000000 00:07:38.890 [2024-05-13 02:48:29.545521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.890 #43 NEW cov: 11987 ft: 14874 corp: 37/216b lim: 10 exec/s: 21 rss: 72Mb L: 5/10 MS: 1 CrossOver- 00:07:38.890 #43 DONE cov: 11987 ft: 14874 corp: 37/216b lim: 10 exec/s: 21 rss: 72Mb 00:07:38.890 Done 43 runs in 2 second(s) 00:07:38.890 [2024-05-13 02:48:29.572611] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:38.890 02:48:29 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_7.conf /var/tmp/suppress_nvmf_fuzz 00:07:38.890 02:48:29 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:38.890 02:48:29 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:38.890 02:48:29 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 8 1 0x1 00:07:38.890 02:48:29 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=8 00:07:38.890 02:48:29 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:38.890 02:48:29 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:38.890 02:48:29 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:07:38.890 02:48:29 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_8.conf 00:07:38.890 02:48:29 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:38.890 02:48:29 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:39.149 02:48:29 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 8 00:07:39.149 02:48:29 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4408 00:07:39.149 02:48:29 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:07:39.149 02:48:29 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' 00:07:39.149 02:48:29 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4408"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:39.149 02:48:29 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:39.149 02:48:29 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo 
leak:nvmf_ctrlr_create 00:07:39.149 02:48:29 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' -c /tmp/fuzz_json_8.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 -Z 8 00:07:39.149 [2024-05-13 02:48:29.733817] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:07:39.149 [2024-05-13 02:48:29.733883] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3500991 ] 00:07:39.149 EAL: No free 2048 kB hugepages reported on node 1 00:07:39.149 [2024-05-13 02:48:29.948314] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:39.409 [2024-05-13 02:48:29.987241] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.409 [2024-05-13 02:48:30.017741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.409 [2024-05-13 02:48:30.070433] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:39.409 [2024-05-13 02:48:30.086376] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:39.409 [2024-05-13 02:48:30.086784] tcp.c: 966:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4408 *** 00:07:39.409 INFO: Running with entropic power schedule (0xFF, 100). 
00:07:39.409 INFO: Seed: 256364903 00:07:39.409 INFO: Loaded 1 modules (350429 inline 8-bit counters): 350429 [0x279074c, 0x27e6029), 00:07:39.409 INFO: Loaded 1 PC tables (350429 PCs): 350429 [0x27e6030,0x2d3ee00), 00:07:39.409 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:07:39.409 INFO: A corpus is not provided, starting from an empty corpus 00:07:39.409 [2024-05-13 02:48:30.152056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.409 [2024-05-13 02:48:30.152086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.409 #2 INITED cov: 11771 ft: 11772 corp: 1/1b exec/s: 0 rss: 69Mb 00:07:39.409 [2024-05-13 02:48:30.192039] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.409 [2024-05-13 02:48:30.192065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.669 #3 NEW cov: 11901 ft: 12338 corp: 2/2b lim: 5 exec/s: 0 rss: 69Mb L: 1/1 MS: 1 ShuffleBytes- 00:07:39.669 [2024-05-13 02:48:30.242202] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.669 [2024-05-13 02:48:30.242229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.669 #4 NEW cov: 11907 ft: 12463 corp: 3/3b lim: 5 exec/s: 0 rss: 70Mb L: 1/1 MS: 1 ChangeByte- 00:07:39.669 [2024-05-13 02:48:30.282513] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.669 [2024-05-13 02:48:30.282539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.669 [2024-05-13 02:48:30.282599] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.669 [2024-05-13 02:48:30.282613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.669 #5 NEW cov: 11992 ft: 13424 corp: 4/5b lim: 5 exec/s: 0 rss: 70Mb L: 2/2 MS: 1 InsertByte- 00:07:39.669 [2024-05-13 02:48:30.322912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.669 [2024-05-13 02:48:30.322937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.669 [2024-05-13 02:48:30.323009] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.669 [2024-05-13 02:48:30.323023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.669 [2024-05-13 02:48:30.323076] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE 
ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.669 [2024-05-13 02:48:30.323089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.669 [2024-05-13 02:48:30.323144] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.669 [2024-05-13 02:48:30.323158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:39.669 #6 NEW cov: 11992 ft: 13808 corp: 5/9b lim: 5 exec/s: 0 rss: 70Mb L: 4/4 MS: 1 InsertRepeatedBytes- 00:07:39.669 [2024-05-13 02:48:30.362748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.669 [2024-05-13 02:48:30.362773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.669 [2024-05-13 02:48:30.362845] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.669 [2024-05-13 02:48:30.362859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.669 #7 NEW cov: 11992 ft: 13856 corp: 6/11b lim: 5 exec/s: 0 rss: 70Mb L: 2/4 MS: 1 InsertByte- 00:07:39.669 [2024-05-13 02:48:30.402668] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.669 [2024-05-13 02:48:30.402693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.669 #8 NEW cov: 11992 ft: 14075 corp: 7/12b lim: 5 exec/s: 0 rss: 70Mb L: 1/4 MS: 1 ChangeByte- 00:07:39.669 [2024-05-13 02:48:30.443056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.669 [2024-05-13 02:48:30.443081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.669 [2024-05-13 02:48:30.443139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.669 [2024-05-13 02:48:30.443153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.669 [2024-05-13 02:48:30.443208] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.669 [2024-05-13 02:48:30.443221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.669 #9 NEW cov: 11992 ft: 14361 corp: 8/15b lim: 5 exec/s: 0 rss: 70Mb L: 3/4 MS: 1 CrossOver- 00:07:39.929 [2024-05-13 02:48:30.492927] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:39.929 [2024-05-13 02:48:30.492952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.929 #10 NEW cov: 11992 ft: 14410 corp: 9/16b lim: 5 exec/s: 0 rss: 70Mb L: 1/4 MS: 1 CrossOver- 00:07:39.929 [2024-05-13 02:48:30.533168] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.929 [2024-05-13 02:48:30.533194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.929 [2024-05-13 02:48:30.533249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.929 [2024-05-13 02:48:30.533263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.929 #11 NEW cov: 11992 ft: 14481 corp: 10/18b lim: 5 exec/s: 0 rss: 70Mb L: 2/4 MS: 1 CopyPart- 00:07:39.929 [2024-05-13 02:48:30.583521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.929 [2024-05-13 02:48:30.583555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.929 [2024-05-13 02:48:30.583632] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.929 [2024-05-13 02:48:30.583646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.929 [2024-05-13 02:48:30.583702] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.929 [2024-05-13 02:48:30.583715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.929 #12 NEW cov: 11992 ft: 14491 corp: 11/21b lim: 5 exec/s: 0 rss: 70Mb L: 3/4 MS: 1 ShuffleBytes- 00:07:39.929 [2024-05-13 02:48:30.633467] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.929 [2024-05-13 02:48:30.633492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.929 [2024-05-13 02:48:30.633548] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.929 [2024-05-13 02:48:30.633562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.929 #13 NEW cov: 11992 ft: 14512 corp: 12/23b lim: 5 exec/s: 0 rss: 70Mb L: 2/4 MS: 1 CopyPart- 00:07:39.929 [2024-05-13 02:48:30.673868] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.929 [2024-05-13 02:48:30.673893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.929 [2024-05-13 02:48:30.673947] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.929 [2024-05-13 02:48:30.673961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.929 [2024-05-13 02:48:30.674014] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.929 [2024-05-13 02:48:30.674027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.929 [2024-05-13 02:48:30.674082] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.929 [2024-05-13 02:48:30.674096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:39.929 #14 NEW cov: 11992 ft: 14560 corp: 13/27b lim: 5 exec/s: 0 rss: 70Mb L: 4/4 MS: 1 CopyPart- 00:07:39.929 [2024-05-13 02:48:30.723559] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.929 [2024-05-13 02:48:30.723584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.189 #15 NEW cov: 11992 ft: 14633 corp: 14/28b lim: 5 exec/s: 0 rss: 70Mb L: 1/4 MS: 1 CopyPart- 00:07:40.189 [2024-05-13 02:48:30.763657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.189 [2024-05-13 02:48:30.763682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.189 #16 NEW cov: 11992 ft: 14697 corp: 15/29b lim: 5 exec/s: 0 rss: 70Mb L: 1/4 MS: 1 ChangeBit- 00:07:40.189 [2024-05-13 02:48:30.803916] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.189 [2024-05-13 02:48:30.803942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.189 [2024-05-13 02:48:30.804000] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.189 [2024-05-13 02:48:30.804014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.189 #17 NEW cov: 11992 ft: 14814 corp: 16/31b lim: 5 exec/s: 0 rss: 70Mb L: 2/4 MS: 1 EraseBytes- 00:07:40.189 [2024-05-13 02:48:30.844330] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.189 [2024-05-13 02:48:30.844356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.189 [2024-05-13 02:48:30.844414] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.189 [2024-05-13 02:48:30.844428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.189 [2024-05-13 02:48:30.844480] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.189 [2024-05-13 02:48:30.844493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.189 [2024-05-13 02:48:30.844547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.189 [2024-05-13 02:48:30.844560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.189 #18 NEW cov: 11992 ft: 14837 corp: 17/35b lim: 5 exec/s: 0 rss: 70Mb L: 4/4 MS: 1 ShuffleBytes- 00:07:40.189 [2024-05-13 02:48:30.894482] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.189 [2024-05-13 02:48:30.894507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.189 [2024-05-13 02:48:30.894563] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.189 [2024-05-13 02:48:30.894577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.189 [2024-05-13 02:48:30.894631] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.189 [2024-05-13 02:48:30.894644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.189 [2024-05-13 02:48:30.894700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.189 [2024-05-13 02:48:30.894713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.189 #19 NEW cov: 11992 ft: 14881 corp: 18/39b lim: 5 exec/s: 0 rss: 70Mb L: 4/4 MS: 1 InsertRepeatedBytes- 00:07:40.189 [2024-05-13 02:48:30.944335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.189 [2024-05-13 02:48:30.944359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.189 [2024-05-13 02:48:30.944435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.189 [2024-05-13 02:48:30.944449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 
cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.189 #20 NEW cov: 11992 ft: 14886 corp: 19/41b lim: 5 exec/s: 0 rss: 70Mb L: 2/4 MS: 1 CopyPart- 00:07:40.466 [2024-05-13 02:48:30.994321] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.466 [2024-05-13 02:48:30.994346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.785 NEW_FUNC[1/1]: 0x1a07120 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:40.785 #21 NEW cov: 12015 ft: 14896 corp: 20/42b lim: 5 exec/s: 21 rss: 71Mb L: 1/4 MS: 1 CrossOver- 00:07:40.785 [2024-05-13 02:48:31.316072] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.785 [2024-05-13 02:48:31.316140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.785 [2024-05-13 02:48:31.316249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.785 [2024-05-13 02:48:31.316281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.785 [2024-05-13 02:48:31.316374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.785 [2024-05-13 02:48:31.316414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.785 [2024-05-13 02:48:31.316512] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.785 [2024-05-13 02:48:31.316543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.785 #22 NEW cov: 12015 ft: 15094 corp: 21/46b lim: 5 exec/s: 22 rss: 72Mb L: 4/4 MS: 1 ShuffleBytes- 00:07:40.785 [2024-05-13 02:48:31.375731] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.785 [2024-05-13 02:48:31.375757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.785 [2024-05-13 02:48:31.375816] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.785 [2024-05-13 02:48:31.375830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.785 [2024-05-13 02:48:31.375883] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.785 [2024-05-13 02:48:31.375896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.785 [2024-05-13 02:48:31.375948] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.785 [2024-05-13 02:48:31.375962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.785 #23 NEW cov: 12015 ft: 15105 corp: 22/50b lim: 5 exec/s: 23 rss: 72Mb L: 4/4 MS: 1 ChangeByte- 00:07:40.785 [2024-05-13 02:48:31.415829] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.785 [2024-05-13 02:48:31.415855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.785 [2024-05-13 02:48:31.415925] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.785 [2024-05-13 02:48:31.415939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.785 [2024-05-13 02:48:31.415996] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.785 [2024-05-13 02:48:31.416009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.785 [2024-05-13 02:48:31.416065] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.785 [2024-05-13 02:48:31.416078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.785 #24 NEW cov: 12015 ft: 15115 corp: 23/54b lim: 5 exec/s: 24 rss: 72Mb L: 4/4 MS: 1 InsertRepeatedBytes- 00:07:40.785 [2024-05-13 02:48:31.455804] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.785 [2024-05-13 02:48:31.455829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.785 [2024-05-13 02:48:31.455889] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.785 [2024-05-13 02:48:31.455902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.785 [2024-05-13 02:48:31.455960] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.785 [2024-05-13 02:48:31.455973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.785 #25 NEW cov: 12015 ft: 15124 corp: 24/57b lim: 5 exec/s: 25 rss: 72Mb L: 3/4 MS: 1 CrossOver- 00:07:40.785 [2024-05-13 02:48:31.496064] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.785 [2024-05-13 
02:48:31.496090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.785 [2024-05-13 02:48:31.496148] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.785 [2024-05-13 02:48:31.496161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.785 [2024-05-13 02:48:31.496215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.785 [2024-05-13 02:48:31.496228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.785 [2024-05-13 02:48:31.496283] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.785 [2024-05-13 02:48:31.496296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.785 #26 NEW cov: 12015 ft: 15129 corp: 25/61b lim: 5 exec/s: 26 rss: 72Mb L: 4/4 MS: 1 CrossOver- 00:07:40.785 [2024-05-13 02:48:31.546079] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.785 [2024-05-13 02:48:31.546104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.785 [2024-05-13 02:48:31.546164] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.785 [2024-05-13 02:48:31.546178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.785 [2024-05-13 02:48:31.546233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.785 [2024-05-13 02:48:31.546246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.785 #27 NEW cov: 12015 ft: 15140 corp: 26/64b lim: 5 exec/s: 27 rss: 72Mb L: 3/4 MS: 1 ChangeByte- 00:07:41.045 [2024-05-13 02:48:31.596130] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.045 [2024-05-13 02:48:31.596156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.045 [2024-05-13 02:48:31.596214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.045 [2024-05-13 02:48:31.596228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.045 #28 NEW cov: 12015 ft: 15162 corp: 27/66b lim: 5 exec/s: 28 rss: 72Mb L: 2/4 MS: 1 ChangeByte- 00:07:41.045 [2024-05-13 02:48:31.636196] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.045 [2024-05-13 02:48:31.636221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.045 [2024-05-13 02:48:31.636278] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.045 [2024-05-13 02:48:31.636291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.045 #29 NEW cov: 12015 ft: 15186 corp: 28/68b lim: 5 exec/s: 29 rss: 72Mb L: 2/4 MS: 1 ShuffleBytes- 00:07:41.045 [2024-05-13 02:48:31.686653] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.045 [2024-05-13 02:48:31.686679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.045 [2024-05-13 02:48:31.686735] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.045 [2024-05-13 02:48:31.686748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.045 [2024-05-13 02:48:31.686802] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.045 [2024-05-13 02:48:31.686815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.045 [2024-05-13 02:48:31.686871] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.045 [2024-05-13 02:48:31.686887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:41.045 #30 NEW cov: 12015 ft: 15199 corp: 29/72b lim: 5 exec/s: 30 rss: 72Mb L: 4/4 MS: 1 ChangeBit- 00:07:41.045 [2024-05-13 02:48:31.736797] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.045 [2024-05-13 02:48:31.736823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.045 [2024-05-13 02:48:31.736879] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.045 [2024-05-13 02:48:31.736893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.045 [2024-05-13 02:48:31.736945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.045 [2024-05-13 02:48:31.736958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 
m:0 dnr:0 00:07:41.045 [2024-05-13 02:48:31.737013] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.045 [2024-05-13 02:48:31.737026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:41.045 #31 NEW cov: 12015 ft: 15214 corp: 30/76b lim: 5 exec/s: 31 rss: 72Mb L: 4/4 MS: 1 InsertByte- 00:07:41.045 [2024-05-13 02:48:31.786653] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.045 [2024-05-13 02:48:31.786679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.045 [2024-05-13 02:48:31.786735] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.045 [2024-05-13 02:48:31.786749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.045 #32 NEW cov: 12015 ft: 15228 corp: 31/78b lim: 5 exec/s: 32 rss: 72Mb L: 2/4 MS: 1 ChangeBit- 00:07:41.045 [2024-05-13 02:48:31.836752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.045 [2024-05-13 02:48:31.836780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.045 [2024-05-13 02:48:31.836837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.045 [2024-05-13 02:48:31.836851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.305 #33 NEW cov: 12015 ft: 15240 corp: 32/80b lim: 5 exec/s: 33 rss: 72Mb L: 2/4 MS: 1 EraseBytes- 00:07:41.305 [2024-05-13 02:48:31.876961] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.305 [2024-05-13 02:48:31.876987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.305 [2024-05-13 02:48:31.877045] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.305 [2024-05-13 02:48:31.877059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.305 [2024-05-13 02:48:31.877137] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.305 [2024-05-13 02:48:31.877151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.305 #34 NEW cov: 12015 ft: 15268 corp: 33/83b lim: 5 exec/s: 34 rss: 72Mb L: 3/4 MS: 1 EraseBytes- 00:07:41.305 [2024-05-13 02:48:31.916849] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.305 [2024-05-13 02:48:31.916875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.305 #35 NEW cov: 12015 ft: 15282 corp: 34/84b lim: 5 exec/s: 35 rss: 72Mb L: 1/4 MS: 1 ShuffleBytes- 00:07:41.305 [2024-05-13 02:48:31.957400] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.305 [2024-05-13 02:48:31.957425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.305 [2024-05-13 02:48:31.957481] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.305 [2024-05-13 02:48:31.957495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.305 [2024-05-13 02:48:31.957548] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.305 [2024-05-13 02:48:31.957561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.305 [2024-05-13 02:48:31.957615] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.305 [2024-05-13 02:48:31.957628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:41.305 #36 NEW cov: 12015 ft: 15288 corp: 35/88b lim: 5 exec/s: 36 rss: 72Mb L: 4/4 MS: 1 InsertByte- 00:07:41.305 [2024-05-13 02:48:31.997701] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.305 [2024-05-13 02:48:31.997726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.305 [2024-05-13 02:48:31.997796] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.305 [2024-05-13 02:48:31.997810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.305 [2024-05-13 02:48:31.997864] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.305 [2024-05-13 02:48:31.997877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.305 [2024-05-13 02:48:31.997940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.305 [2024-05-13 02:48:31.997953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 
m:0 dnr:0 00:07:41.305 [2024-05-13 02:48:31.998006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.305 [2024-05-13 02:48:31.998019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:41.305 #37 NEW cov: 12015 ft: 15342 corp: 36/93b lim: 5 exec/s: 37 rss: 72Mb L: 5/5 MS: 1 InsertByte- 00:07:41.305 [2024-05-13 02:48:32.037532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.305 [2024-05-13 02:48:32.037557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.305 [2024-05-13 02:48:32.037613] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.305 [2024-05-13 02:48:32.037627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.305 [2024-05-13 02:48:32.037684] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.306 [2024-05-13 02:48:32.037697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.306 #38 NEW cov: 12015 ft: 15374 corp: 37/96b lim: 5 exec/s: 38 rss: 72Mb L: 3/5 MS: 1 ChangeByte- 00:07:41.306 [2024-05-13 02:48:32.077617] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.306 [2024-05-13 02:48:32.077642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.306 [2024-05-13 02:48:32.077698] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.306 [2024-05-13 02:48:32.077712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.306 [2024-05-13 02:48:32.077766] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.306 [2024-05-13 02:48:32.077779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.306 #39 NEW cov: 12015 ft: 15394 corp: 38/99b lim: 5 exec/s: 39 rss: 72Mb L: 3/5 MS: 1 CrossOver- 00:07:41.575 [2024-05-13 02:48:32.117576] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.575 [2024-05-13 02:48:32.117602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.575 [2024-05-13 02:48:32.117672] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:07:41.575 [2024-05-13 02:48:32.117686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.575 #40 NEW cov: 12015 ft: 15402 corp: 39/101b lim: 5 exec/s: 20 rss: 73Mb L: 2/5 MS: 1 EraseBytes- 00:07:41.575 #40 DONE cov: 12015 ft: 15402 corp: 39/101b lim: 5 exec/s: 20 rss: 73Mb 00:07:41.575 Done 40 runs in 2 second(s) 00:07:41.575 [2024-05-13 02:48:32.146179] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:41.575 02:48:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_8.conf /var/tmp/suppress_nvmf_fuzz 00:07:41.575 02:48:32 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:41.575 02:48:32 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:41.575 02:48:32 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 9 1 0x1 00:07:41.575 02:48:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=9 00:07:41.575 02:48:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:41.575 02:48:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:41.575 02:48:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:07:41.575 02:48:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_9.conf 00:07:41.575 02:48:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:41.575 02:48:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:41.575 02:48:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 9 00:07:41.575 02:48:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4409 00:07:41.575 02:48:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:07:41.575 02:48:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' 00:07:41.575 02:48:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4409"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:41.575 02:48:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:41.575 02:48:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:41.575 02:48:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' -c /tmp/fuzz_json_9.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 -Z 9 00:07:41.575 [2024-05-13 02:48:32.306630] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 
00:07:41.575 [2024-05-13 02:48:32.306704] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3501529 ] 00:07:41.575 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.835 [2024-05-13 02:48:32.516905] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:41.835 [2024-05-13 02:48:32.554901] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.835 [2024-05-13 02:48:32.584367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.835 [2024-05-13 02:48:32.636944] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:42.094 [2024-05-13 02:48:32.652894] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:42.094 [2024-05-13 02:48:32.653284] tcp.c: 966:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4409 *** 00:07:42.094 INFO: Running with entropic power schedule (0xFF, 100). 00:07:42.094 INFO: Seed: 2821343494 00:07:42.094 INFO: Loaded 1 modules (350429 inline 8-bit counters): 350429 [0x279074c, 0x27e6029), 00:07:42.094 INFO: Loaded 1 PC tables (350429 PCs): 350429 [0x27e6030,0x2d3ee00), 00:07:42.094 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:07:42.094 INFO: A corpus is not provided, starting from an empty corpus 00:07:42.094 [2024-05-13 02:48:32.720032] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.094 [2024-05-13 02:48:32.720070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.094 #2 INITED cov: 11740 ft: 11770 corp: 1/1b exec/s: 0 rss: 69Mb 00:07:42.094 [2024-05-13 02:48:32.770115] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.095 [2024-05-13 02:48:32.770141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.354 NEW_FUNC[1/4]: 0xf4e220 in rte_get_tsc_cycles /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include/rte_cycles.h:61 00:07:42.354 NEW_FUNC[2/4]: 0x1a00ac0 in event_queue_run_batch /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:528 00:07:42.354 #3 NEW cov: 11901 ft: 12411 corp: 2/2b lim: 5 exec/s: 0 rss: 70Mb L: 1/1 MS: 1 ShuffleBytes- 00:07:42.354 [2024-05-13 02:48:33.100859] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.354 [2024-05-13 02:48:33.100894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.354 [2024-05-13 02:48:33.101026] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.354 [2024-05-13 02:48:33.101046] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.354 #4 NEW cov: 11907 ft: 13506 corp: 3/4b lim: 5 exec/s: 0 rss: 70Mb L: 2/2 MS: 1 InsertByte- 00:07:42.354 [2024-05-13 02:48:33.150649] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.354 [2024-05-13 02:48:33.150679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.614 #5 NEW cov: 11992 ft: 13836 corp: 4/5b lim: 5 exec/s: 0 rss: 70Mb L: 1/2 MS: 1 ChangeByte- 00:07:42.614 [2024-05-13 02:48:33.200815] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.614 [2024-05-13 02:48:33.200845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.614 #6 NEW cov: 11992 ft: 13932 corp: 5/6b lim: 5 exec/s: 0 rss: 70Mb L: 1/2 MS: 1 ChangeBit- 00:07:42.614 [2024-05-13 02:48:33.261846] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.614 [2024-05-13 02:48:33.261875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.614 [2024-05-13 02:48:33.262016] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.614 [2024-05-13 02:48:33.262035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.614 [2024-05-13 02:48:33.262173] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.614 [2024-05-13 02:48:33.262191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.614 [2024-05-13 02:48:33.262321] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.614 [2024-05-13 02:48:33.262339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:42.614 #7 NEW cov: 11992 ft: 14276 corp: 6/10b lim: 5 exec/s: 0 rss: 70Mb L: 4/4 MS: 1 InsertRepeatedBytes- 00:07:42.614 [2024-05-13 02:48:33.322018] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.614 [2024-05-13 02:48:33.322046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.614 [2024-05-13 02:48:33.322186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.614 [2024-05-13 02:48:33.322208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 
dnr:0 00:07:42.614 [2024-05-13 02:48:33.322328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.614 [2024-05-13 02:48:33.322348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.614 [2024-05-13 02:48:33.322474] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.614 [2024-05-13 02:48:33.322492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:42.614 #8 NEW cov: 11992 ft: 14307 corp: 7/14b lim: 5 exec/s: 0 rss: 70Mb L: 4/4 MS: 1 CopyPart- 00:07:42.614 [2024-05-13 02:48:33.381674] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.614 [2024-05-13 02:48:33.381703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.614 [2024-05-13 02:48:33.381838] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.614 [2024-05-13 02:48:33.381858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.614 #9 NEW cov: 11992 ft: 14319 corp: 8/16b lim: 5 exec/s: 0 rss: 70Mb L: 2/4 MS: 1 InsertByte- 00:07:42.873 [2024-05-13 02:48:33.431499] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.873 [2024-05-13 02:48:33.431529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.873 #10 NEW cov: 11992 ft: 14402 corp: 9/17b lim: 5 exec/s: 0 rss: 70Mb L: 1/4 MS: 1 ChangeBit- 00:07:42.873 [2024-05-13 02:48:33.481599] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.873 [2024-05-13 02:48:33.481630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.874 #11 NEW cov: 11992 ft: 14460 corp: 10/18b lim: 5 exec/s: 0 rss: 70Mb L: 1/4 MS: 1 ChangeByte- 00:07:42.874 [2024-05-13 02:48:33.531786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.874 [2024-05-13 02:48:33.531816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.874 #12 NEW cov: 11992 ft: 14489 corp: 11/19b lim: 5 exec/s: 0 rss: 70Mb L: 1/4 MS: 1 ShuffleBytes- 00:07:42.874 [2024-05-13 02:48:33.582805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.874 [2024-05-13 02:48:33.582832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 
dnr:0 00:07:42.874 [2024-05-13 02:48:33.582964] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.874 [2024-05-13 02:48:33.582983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.874 [2024-05-13 02:48:33.583126] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.874 [2024-05-13 02:48:33.583145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.874 [2024-05-13 02:48:33.583275] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.874 [2024-05-13 02:48:33.583295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:42.874 NEW_FUNC[1/1]: 0x1a07120 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:42.874 #13 NEW cov: 12015 ft: 14528 corp: 12/23b lim: 5 exec/s: 0 rss: 70Mb L: 4/4 MS: 1 ChangeByte- 00:07:42.874 [2024-05-13 02:48:33.642456] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.874 [2024-05-13 02:48:33.642485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.874 [2024-05-13 02:48:33.642624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.874 [2024-05-13 02:48:33.642645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.874 #14 NEW cov: 12015 ft: 14535 corp: 13/25b lim: 5 exec/s: 0 rss: 70Mb L: 2/4 MS: 1 CopyPart- 00:07:43.131 [2024-05-13 02:48:33.692661] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.132 [2024-05-13 02:48:33.692693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.132 [2024-05-13 02:48:33.692830] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.132 [2024-05-13 02:48:33.692848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.132 #15 NEW cov: 12015 ft: 14579 corp: 14/27b lim: 5 exec/s: 15 rss: 70Mb L: 2/4 MS: 1 CopyPart- 00:07:43.132 [2024-05-13 02:48:33.742840] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.132 [2024-05-13 02:48:33.742870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.132 [2024-05-13 02:48:33.743006] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.132 [2024-05-13 02:48:33.743025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.132 #16 NEW cov: 12015 ft: 14594 corp: 15/29b lim: 5 exec/s: 16 rss: 70Mb L: 2/4 MS: 1 CopyPart- 00:07:43.132 [2024-05-13 02:48:33.802973] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.132 [2024-05-13 02:48:33.803003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.132 [2024-05-13 02:48:33.803136] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.132 [2024-05-13 02:48:33.803155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.132 #17 NEW cov: 12015 ft: 14625 corp: 16/31b lim: 5 exec/s: 17 rss: 70Mb L: 2/4 MS: 1 ChangeBit- 00:07:43.132 [2024-05-13 02:48:33.853148] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.132 [2024-05-13 02:48:33.853177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.132 [2024-05-13 02:48:33.853311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.132 [2024-05-13 02:48:33.853328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.132 #18 NEW cov: 12015 ft: 14646 corp: 17/33b lim: 5 exec/s: 18 rss: 70Mb L: 2/4 MS: 1 ChangeBit- 00:07:43.132 [2024-05-13 02:48:33.913396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.132 [2024-05-13 02:48:33.913425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.132 [2024-05-13 02:48:33.913546] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.132 [2024-05-13 02:48:33.913566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.390 #19 NEW cov: 12015 ft: 14705 corp: 18/35b lim: 5 exec/s: 19 rss: 70Mb L: 2/4 MS: 1 CopyPart- 00:07:43.390 [2024-05-13 02:48:33.973463] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.390 [2024-05-13 02:48:33.973491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.390 [2024-05-13 02:48:33.973615] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 
cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.390 [2024-05-13 02:48:33.973634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.390 #20 NEW cov: 12015 ft: 14712 corp: 19/37b lim: 5 exec/s: 20 rss: 71Mb L: 2/4 MS: 1 InsertByte- 00:07:43.390 [2024-05-13 02:48:34.033673] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.390 [2024-05-13 02:48:34.033700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.390 [2024-05-13 02:48:34.033837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.390 [2024-05-13 02:48:34.033856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.390 #21 NEW cov: 12015 ft: 14731 corp: 20/39b lim: 5 exec/s: 21 rss: 71Mb L: 2/4 MS: 1 ShuffleBytes- 00:07:43.390 [2024-05-13 02:48:34.093635] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.390 [2024-05-13 02:48:34.093664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.390 #22 NEW cov: 12015 ft: 14771 corp: 21/40b lim: 5 exec/s: 22 rss: 71Mb L: 1/4 MS: 1 ChangeBit- 00:07:43.390 [2024-05-13 02:48:34.143770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.390 [2024-05-13 02:48:34.143799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.390 #23 NEW cov: 12015 ft: 14791 corp: 22/41b lim: 5 exec/s: 23 rss: 71Mb L: 1/4 MS: 1 ChangeByte- 00:07:43.648 [2024-05-13 02:48:34.194815] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.648 [2024-05-13 02:48:34.194843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.648 [2024-05-13 02:48:34.194978] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.648 [2024-05-13 02:48:34.194997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.648 [2024-05-13 02:48:34.195121] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.648 [2024-05-13 02:48:34.195140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.648 [2024-05-13 02:48:34.195273] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.648 [2024-05-13 
02:48:34.195292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:43.648 #24 NEW cov: 12015 ft: 14811 corp: 23/45b lim: 5 exec/s: 24 rss: 71Mb L: 4/4 MS: 1 ChangeByte- 00:07:43.648 [2024-05-13 02:48:34.255081] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.648 [2024-05-13 02:48:34.255109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.649 [2024-05-13 02:48:34.255247] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.649 [2024-05-13 02:48:34.255266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.649 [2024-05-13 02:48:34.255411] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.649 [2024-05-13 02:48:34.255429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.649 [2024-05-13 02:48:34.255560] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.649 [2024-05-13 02:48:34.255577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:43.649 #25 NEW cov: 12015 ft: 14833 corp: 24/49b lim: 5 exec/s: 25 rss: 71Mb L: 4/4 MS: 1 CMP- DE: "\377\036"- 00:07:43.649 [2024-05-13 02:48:34.305398] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.649 [2024-05-13 02:48:34.305424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.649 [2024-05-13 02:48:34.305558] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.649 [2024-05-13 02:48:34.305576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.649 [2024-05-13 02:48:34.305712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.649 [2024-05-13 02:48:34.305731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.649 [2024-05-13 02:48:34.305863] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.649 [2024-05-13 02:48:34.305881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:43.649 [2024-05-13 02:48:34.306015] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000001 
cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.649 [2024-05-13 02:48:34.306035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:43.649 #26 NEW cov: 12015 ft: 14882 corp: 25/54b lim: 5 exec/s: 26 rss: 71Mb L: 5/5 MS: 1 InsertByte- 00:07:43.649 [2024-05-13 02:48:34.354435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.649 [2024-05-13 02:48:34.354463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.649 #27 NEW cov: 12015 ft: 14888 corp: 26/55b lim: 5 exec/s: 27 rss: 71Mb L: 1/5 MS: 1 ChangeByte- 00:07:43.649 [2024-05-13 02:48:34.405187] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.649 [2024-05-13 02:48:34.405217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.649 [2024-05-13 02:48:34.405358] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.649 [2024-05-13 02:48:34.405382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.649 [2024-05-13 02:48:34.405506] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.649 [2024-05-13 02:48:34.405523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.649 #28 NEW cov: 12015 ft: 15062 corp: 27/58b lim: 5 exec/s: 28 rss: 71Mb L: 3/5 MS: 1 CrossOver- 00:07:43.907 [2024-05-13 02:48:34.465944] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.907 [2024-05-13 02:48:34.465973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.907 [2024-05-13 02:48:34.466108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.907 [2024-05-13 02:48:34.466128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.907 [2024-05-13 02:48:34.466251] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.907 [2024-05-13 02:48:34.466270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.907 [2024-05-13 02:48:34.466399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.907 [2024-05-13 02:48:34.466425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 
m:0 dnr:0 00:07:43.907 [2024-05-13 02:48:34.466542] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.907 [2024-05-13 02:48:34.466560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:43.907 #29 NEW cov: 12015 ft: 15081 corp: 28/63b lim: 5 exec/s: 29 rss: 71Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:07:43.907 [2024-05-13 02:48:34.515891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.907 [2024-05-13 02:48:34.515921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.908 [2024-05-13 02:48:34.516064] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.908 [2024-05-13 02:48:34.516081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.908 [2024-05-13 02:48:34.516212] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.908 [2024-05-13 02:48:34.516232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.908 [2024-05-13 02:48:34.516364] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.908 [2024-05-13 02:48:34.516387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:43.908 #30 NEW cov: 12015 ft: 15082 corp: 29/67b lim: 5 exec/s: 30 rss: 71Mb L: 4/5 MS: 1 InsertByte- 00:07:43.908 [2024-05-13 02:48:34.575352] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.908 [2024-05-13 02:48:34.575378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.908 [2024-05-13 02:48:34.575518] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.908 [2024-05-13 02:48:34.575539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.908 #31 NEW cov: 12015 ft: 15094 corp: 30/69b lim: 5 exec/s: 31 rss: 71Mb L: 2/5 MS: 1 PersAutoDict- DE: "\377\036"- 00:07:43.908 [2024-05-13 02:48:34.625859] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.908 [2024-05-13 02:48:34.625888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.908 [2024-05-13 02:48:34.626036] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 
cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.908 [2024-05-13 02:48:34.626057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.908 [2024-05-13 02:48:34.626185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.908 [2024-05-13 02:48:34.626202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.908 #32 NEW cov: 12015 ft: 15103 corp: 31/72b lim: 5 exec/s: 32 rss: 71Mb L: 3/5 MS: 1 InsertByte- 00:07:43.908 [2024-05-13 02:48:34.676530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.908 [2024-05-13 02:48:34.676556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.908 [2024-05-13 02:48:34.676685] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.908 [2024-05-13 02:48:34.676703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.908 [2024-05-13 02:48:34.676834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.908 [2024-05-13 02:48:34.676852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.908 [2024-05-13 02:48:34.676977] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.908 [2024-05-13 02:48:34.676996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:43.908 [2024-05-13 02:48:34.677121] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.908 [2024-05-13 02:48:34.677139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:43.908 #33 NEW cov: 12015 ft: 15124 corp: 32/77b lim: 5 exec/s: 16 rss: 71Mb L: 5/5 MS: 1 PersAutoDict- DE: "\377\036"- 00:07:43.908 #33 DONE cov: 12015 ft: 15124 corp: 32/77b lim: 5 exec/s: 16 rss: 71Mb 00:07:43.908 ###### Recommended dictionary. ###### 00:07:43.908 "\377\036" # Uses: 2 00:07:43.908 ###### End of recommended dictionary. 
###### 00:07:43.908 Done 33 runs in 2 second(s) 00:07:43.908 [2024-05-13 02:48:34.707266] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:44.167 02:48:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_9.conf /var/tmp/suppress_nvmf_fuzz 00:07:44.167 02:48:34 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:44.167 02:48:34 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:44.167 02:48:34 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 10 1 0x1 00:07:44.167 02:48:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=10 00:07:44.167 02:48:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:44.167 02:48:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:44.167 02:48:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:07:44.167 02:48:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_10.conf 00:07:44.167 02:48:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:44.167 02:48:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:44.167 02:48:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 10 00:07:44.167 02:48:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4410 00:07:44.167 02:48:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:07:44.167 02:48:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' 00:07:44.167 02:48:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4410"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:44.167 02:48:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:44.167 02:48:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:44.167 02:48:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' -c /tmp/fuzz_json_10.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 -Z 10 00:07:44.167 [2024-05-13 02:48:34.868517] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:07:44.167 [2024-05-13 02:48:34.868580] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3501872 ] 00:07:44.167 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.425 [2024-05-13 02:48:35.079973] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:44.425 [2024-05-13 02:48:35.118122] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.425 [2024-05-13 02:48:35.147480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.425 [2024-05-13 02:48:35.200142] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:44.425 [2024-05-13 02:48:35.216096] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:44.425 [2024-05-13 02:48:35.216506] tcp.c: 966:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4410 *** 00:07:44.684 INFO: Running with entropic power schedule (0xFF, 100). 00:07:44.684 INFO: Seed: 1090385724 00:07:44.684 INFO: Loaded 1 modules (350429 inline 8-bit counters): 350429 [0x279074c, 0x27e6029), 00:07:44.684 INFO: Loaded 1 PC tables (350429 PCs): 350429 [0x27e6030,0x2d3ee00), 00:07:44.684 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:07:44.684 INFO: A corpus is not provided, starting from an empty corpus 00:07:44.684 #2 INITED exec/s: 0 rss: 63Mb 00:07:44.684 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:44.684 This may also happen if the target rejected all inputs we tried so far 00:07:44.684 [2024-05-13 02:48:35.272003] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:06ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.684 [2024-05-13 02:48:35.272031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.684 [2024-05-13 02:48:35.272108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.684 [2024-05-13 02:48:35.272122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.684 [2024-05-13 02:48:35.272185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.684 [2024-05-13 02:48:35.272199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.943 NEW_FUNC[1/685]: 0x4b0ad0 in fuzz_admin_security_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:205 00:07:44.943 NEW_FUNC[2/685]: 0x4e0360 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:44.943 #11 NEW cov: 11794 ft: 11786 corp: 2/27b lim: 40 exec/s: 0 rss: 70Mb L: 26/26 MS: 4 CopyPart-ChangeBit-ChangeBit-InsertRepeatedBytes- 00:07:44.943 [2024-05-13 02:48:35.602711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:06ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.943 [2024-05-13 02:48:35.602744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.943 [2024-05-13 02:48:35.602818] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.943 [2024-05-13 02:48:35.602832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.943 [2024-05-13 02:48:35.602889] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.943 [2024-05-13 02:48:35.602902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.943 #17 NEW cov: 11924 ft: 12432 corp: 3/54b lim: 40 exec/s: 0 rss: 70Mb L: 27/27 MS: 1 InsertByte- 00:07:44.943 [2024-05-13 02:48:35.652805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:06ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.943 [2024-05-13 02:48:35.652833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.943 [2024-05-13 02:48:35.652908] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:fdffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.943 [2024-05-13 02:48:35.652922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.943 [2024-05-13 02:48:35.652980] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.943 [2024-05-13 02:48:35.652993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.943 #18 NEW cov: 11930 ft: 12704 corp: 4/81b lim: 40 exec/s: 0 rss: 70Mb L: 27/27 MS: 1 ChangeBit- 00:07:44.943 [2024-05-13 02:48:35.692931] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:06ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.943 [2024-05-13 02:48:35.692955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.943 [2024-05-13 02:48:35.693015] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:2bffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.943 [2024-05-13 02:48:35.693029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.943 [2024-05-13 02:48:35.693088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.943 [2024-05-13 02:48:35.693101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.943 #19 NEW cov: 12015 ft: 13088 corp: 5/108b lim: 40 exec/s: 0 rss: 70Mb L: 27/27 MS: 1 ChangeByte- 00:07:44.943 [2024-05-13 02:48:35.733042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:06ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.943 [2024-05-13 02:48:35.733067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.943 
[2024-05-13 02:48:35.733141] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ff000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.943 [2024-05-13 02:48:35.733155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.943 [2024-05-13 02:48:35.733213] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.943 [2024-05-13 02:48:35.733226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.202 #25 NEW cov: 12015 ft: 13209 corp: 6/134b lim: 40 exec/s: 0 rss: 70Mb L: 26/27 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\000"- 00:07:45.202 [2024-05-13 02:48:35.773079] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:06ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.202 [2024-05-13 02:48:35.773104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.202 [2024-05-13 02:48:35.773179] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff40ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.202 [2024-05-13 02:48:35.773196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.202 [2024-05-13 02:48:35.773254] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.202 [2024-05-13 02:48:35.773268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.202 #26 NEW cov: 12015 ft: 13247 corp: 7/161b lim: 40 exec/s: 0 rss: 70Mb L: 27/27 MS: 1 ChangeByte- 00:07:45.202 [2024-05-13 02:48:35.813234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a06ffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.202 [2024-05-13 02:48:35.813259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.202 [2024-05-13 02:48:35.813332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:fffdffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.202 [2024-05-13 02:48:35.813346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.202 [2024-05-13 02:48:35.813404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.202 [2024-05-13 02:48:35.813417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.202 #27 NEW cov: 12015 ft: 13356 corp: 8/189b lim: 40 exec/s: 0 rss: 70Mb L: 28/28 MS: 1 CrossOver- 00:07:45.202 [2024-05-13 02:48:35.853323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:06ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:07:45.202 [2024-05-13 02:48:35.853348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.202 [2024-05-13 02:48:35.853427] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ff010000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.202 [2024-05-13 02:48:35.853441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.202 [2024-05-13 02:48:35.853509] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.202 [2024-05-13 02:48:35.853523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.202 #28 NEW cov: 12015 ft: 13369 corp: 9/215b lim: 40 exec/s: 0 rss: 70Mb L: 26/28 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\000"- 00:07:45.202 [2024-05-13 02:48:35.893613] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:06ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.202 [2024-05-13 02:48:35.893637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.202 [2024-05-13 02:48:35.893693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:2bffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.202 [2024-05-13 02:48:35.893707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.202 [2024-05-13 02:48:35.893765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.202 [2024-05-13 02:48:35.893777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.202 [2024-05-13 02:48:35.893833] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ff0a0100 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.202 [2024-05-13 02:48:35.893848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:45.202 #29 NEW cov: 12015 ft: 13866 corp: 10/250b lim: 40 exec/s: 0 rss: 70Mb L: 35/35 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\000"- 00:07:45.202 [2024-05-13 02:48:35.943724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:06ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.202 [2024-05-13 02:48:35.943750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.202 [2024-05-13 02:48:35.943811] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:fdffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.202 [2024-05-13 02:48:35.943825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.202 [2024-05-13 02:48:35.943885] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffff01 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.202 [2024-05-13 02:48:35.943899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.202 [2024-05-13 02:48:35.943959] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.202 [2024-05-13 02:48:35.943972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:45.202 #30 NEW cov: 12015 ft: 13911 corp: 11/285b lim: 40 exec/s: 0 rss: 70Mb L: 35/35 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\000"- 00:07:45.202 [2024-05-13 02:48:35.993762] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a06ffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.202 [2024-05-13 02:48:35.993787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.202 [2024-05-13 02:48:35.993846] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:fffdffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.202 [2024-05-13 02:48:35.993859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.202 [2024-05-13 02:48:35.993918] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.202 [2024-05-13 02:48:35.993931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.461 #31 NEW cov: 12015 ft: 13935 corp: 12/316b lim: 40 exec/s: 0 rss: 70Mb L: 31/35 MS: 1 CopyPart- 00:07:45.461 [2024-05-13 02:48:36.033971] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:06ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.461 [2024-05-13 02:48:36.033996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.461 [2024-05-13 02:48:36.034071] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ff000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.461 [2024-05-13 02:48:36.034085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.461 [2024-05-13 02:48:36.034145] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00ffffff cdw11:ffffff01 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.461 [2024-05-13 02:48:36.034159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.461 [2024-05-13 02:48:36.034221] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.461 [2024-05-13 02:48:36.034234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:45.461 #32 NEW cov: 12015 ft: 14002 corp: 13/351b lim: 40 exec/s: 0 rss: 70Mb L: 35/35 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:07:45.461 [2024-05-13 02:48:36.084025] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a06ffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.461 [2024-05-13 02:48:36.084050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.461 [2024-05-13 02:48:36.084126] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:fffdffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.461 [2024-05-13 02:48:36.084141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.461 [2024-05-13 02:48:36.084198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffff4fff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.461 [2024-05-13 02:48:36.084211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.461 #33 NEW cov: 12015 ft: 14009 corp: 14/379b lim: 40 exec/s: 0 rss: 70Mb L: 28/35 MS: 1 ChangeByte- 00:07:45.461 [2024-05-13 02:48:36.124103] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:16ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.461 [2024-05-13 02:48:36.124128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.461 [2024-05-13 02:48:36.124203] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.462 [2024-05-13 02:48:36.124216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.462 [2024-05-13 02:48:36.124273] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.462 [2024-05-13 02:48:36.124287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.462 #34 NEW cov: 12015 ft: 14027 corp: 15/406b lim: 40 exec/s: 0 rss: 70Mb L: 27/35 MS: 1 ChangeBit- 00:07:45.462 [2024-05-13 02:48:36.164199] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:06ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.462 [2024-05-13 02:48:36.164224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.462 [2024-05-13 02:48:36.164301] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:712bffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.462 [2024-05-13 02:48:36.164315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.462 [2024-05-13 02:48:36.164374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 
nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.462 [2024-05-13 02:48:36.164391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.462 NEW_FUNC[1/1]: 0x1a07120 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:45.462 #35 NEW cov: 12038 ft: 14072 corp: 16/434b lim: 40 exec/s: 0 rss: 70Mb L: 28/35 MS: 1 InsertByte- 00:07:45.462 [2024-05-13 02:48:36.204473] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a06ffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.462 [2024-05-13 02:48:36.204499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.462 [2024-05-13 02:48:36.204556] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:fffdffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.462 [2024-05-13 02:48:36.204570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.462 [2024-05-13 02:48:36.204626] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.462 [2024-05-13 02:48:36.204639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.462 [2024-05-13 02:48:36.204696] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffff0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.462 [2024-05-13 02:48:36.204709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:45.462 #36 NEW cov: 12038 ft: 14109 corp: 17/467b lim: 40 exec/s: 0 rss: 70Mb L: 33/35 MS: 1 CopyPart- 00:07:45.462 [2024-05-13 02:48:36.244528] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:06ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.462 [2024-05-13 02:48:36.244554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.462 [2024-05-13 02:48:36.244613] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:fffffdff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.462 [2024-05-13 02:48:36.244626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.462 [2024-05-13 02:48:36.244685] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ff010000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.462 [2024-05-13 02:48:36.244699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.462 [2024-05-13 02:48:36.244756] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00ffff0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.462 [2024-05-13 02:48:36.244768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) 
qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:45.721 #37 NEW cov: 12038 ft: 14144 corp: 18/500b lim: 40 exec/s: 37 rss: 70Mb L: 33/35 MS: 1 EraseBytes- 00:07:45.721 [2024-05-13 02:48:36.284424] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:06ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.721 [2024-05-13 02:48:36.284450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.721 [2024-05-13 02:48:36.284525] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:2bffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.721 [2024-05-13 02:48:36.284538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.721 #38 NEW cov: 12038 ft: 14416 corp: 19/520b lim: 40 exec/s: 38 rss: 71Mb L: 20/35 MS: 1 CrossOver- 00:07:45.721 [2024-05-13 02:48:36.334694] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:06ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.721 [2024-05-13 02:48:36.334719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.721 [2024-05-13 02:48:36.334799] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:712bffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.721 [2024-05-13 02:48:36.334813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.721 [2024-05-13 02:48:36.334871] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffff01 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.721 [2024-05-13 02:48:36.334884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.721 #39 NEW cov: 12038 ft: 14426 corp: 20/548b lim: 40 exec/s: 39 rss: 71Mb L: 28/35 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\000"- 00:07:45.721 [2024-05-13 02:48:36.374944] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:06ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.721 [2024-05-13 02:48:36.374969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.721 [2024-05-13 02:48:36.375026] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ff000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.721 [2024-05-13 02:48:36.375039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.721 [2024-05-13 02:48:36.375093] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00ffffff cdw11:ff000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.721 [2024-05-13 02:48:36.375106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.721 [2024-05-13 02:48:36.375180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) 
qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.721 [2024-05-13 02:48:36.375194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:45.721 #40 NEW cov: 12038 ft: 14445 corp: 21/583b lim: 40 exec/s: 40 rss: 71Mb L: 35/35 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:07:45.721 [2024-05-13 02:48:36.425121] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:06ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.721 [2024-05-13 02:48:36.425145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.721 [2024-05-13 02:48:36.425222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.721 [2024-05-13 02:48:36.425236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.721 [2024-05-13 02:48:36.425297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff40ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.721 [2024-05-13 02:48:36.425310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.721 [2024-05-13 02:48:36.425369] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.721 [2024-05-13 02:48:36.425387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:45.721 #41 NEW cov: 12038 ft: 14461 corp: 22/618b lim: 40 exec/s: 41 rss: 71Mb L: 35/35 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\377"- 00:07:45.721 [2024-05-13 02:48:36.474947] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a06ffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.721 [2024-05-13 02:48:36.474976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.721 [2024-05-13 02:48:36.475033] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0a88 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.721 [2024-05-13 02:48:36.475046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.721 #47 NEW cov: 12038 ft: 14470 corp: 23/634b lim: 40 exec/s: 47 rss: 71Mb L: 16/35 MS: 1 EraseBytes- 00:07:45.721 [2024-05-13 02:48:36.515373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:06ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.721 [2024-05-13 02:48:36.515402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.721 [2024-05-13 02:48:36.515477] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffff0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.721 [2024-05-13 02:48:36.515491] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.721 [2024-05-13 02:48:36.515546] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.721 [2024-05-13 02:48:36.515569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.721 [2024-05-13 02:48:36.515626] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:01000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.721 [2024-05-13 02:48:36.515639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:45.980 #48 NEW cov: 12038 ft: 14477 corp: 24/670b lim: 40 exec/s: 48 rss: 71Mb L: 36/36 MS: 1 CrossOver- 00:07:45.980 [2024-05-13 02:48:36.555335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:06ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.980 [2024-05-13 02:48:36.555361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.980 [2024-05-13 02:48:36.555433] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ff008430 cdw11:a2c2b5f2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.980 [2024-05-13 02:48:36.555447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.980 [2024-05-13 02:48:36.555499] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:c4ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.980 [2024-05-13 02:48:36.555512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.980 #49 NEW cov: 12038 ft: 14478 corp: 25/696b lim: 40 exec/s: 49 rss: 71Mb L: 26/36 MS: 1 CMP- DE: "\000\2040\242\302\265\362\304"- 00:07:45.980 [2024-05-13 02:48:36.595502] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:06ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.980 [2024-05-13 02:48:36.595526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.980 [2024-05-13 02:48:36.595585] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:0084309d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.980 [2024-05-13 02:48:36.595598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.980 [2024-05-13 02:48:36.595657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:b531ab40 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.980 [2024-05-13 02:48:36.595670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.980 #50 NEW cov: 12038 ft: 14488 corp: 26/723b lim: 40 exec/s: 50 rss: 71Mb L: 27/36 MS: 1 CMP- DE: "\000\2040\235\2651\253@"- 00:07:45.980 [2024-05-13 
02:48:36.635697] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:06ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.980 [2024-05-13 02:48:36.635721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.980 [2024-05-13 02:48:36.635777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ff0000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.980 [2024-05-13 02:48:36.635790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.980 [2024-05-13 02:48:36.635849] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff40ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.981 [2024-05-13 02:48:36.635861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.981 [2024-05-13 02:48:36.635934] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.981 [2024-05-13 02:48:36.635947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:45.981 #51 NEW cov: 12038 ft: 14499 corp: 27/758b lim: 40 exec/s: 51 rss: 71Mb L: 35/36 MS: 1 ChangeBinInt- 00:07:45.981 [2024-05-13 02:48:36.685522] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a06ffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.981 [2024-05-13 02:48:36.685547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.981 #52 NEW cov: 12038 ft: 14812 corp: 28/773b lim: 40 exec/s: 52 rss: 71Mb L: 15/36 MS: 1 EraseBytes- 00:07:45.981 [2024-05-13 02:48:36.735981] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:06ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.981 [2024-05-13 02:48:36.736006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.981 [2024-05-13 02:48:36.736064] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:fdffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.981 [2024-05-13 02:48:36.736077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.981 [2024-05-13 02:48:36.736132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ff060000 cdw11:000000fe SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.981 [2024-05-13 02:48:36.736145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.981 [2024-05-13 02:48:36.736200] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ff000000 cdw11:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.981 [2024-05-13 02:48:36.736213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 
cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:45.981 #53 NEW cov: 12038 ft: 14831 corp: 29/808b lim: 40 exec/s: 53 rss: 71Mb L: 35/36 MS: 1 ChangeBinInt- 00:07:45.981 [2024-05-13 02:48:36.775969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:06ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.981 [2024-05-13 02:48:36.775996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.981 [2024-05-13 02:48:36.776055] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.981 [2024-05-13 02:48:36.776069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.981 [2024-05-13 02:48:36.776127] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffff01 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.981 [2024-05-13 02:48:36.776140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:46.239 #54 NEW cov: 12038 ft: 14840 corp: 30/839b lim: 40 exec/s: 54 rss: 72Mb L: 31/36 MS: 1 EraseBytes- 00:07:46.239 [2024-05-13 02:48:36.816351] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:06ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.239 [2024-05-13 02:48:36.816376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.239 [2024-05-13 02:48:36.816452] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ff00006a cdw11:6a6a6a6a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.239 [2024-05-13 02:48:36.816466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.239 [2024-05-13 02:48:36.816520] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:0000ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.239 [2024-05-13 02:48:36.816533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:46.239 [2024-05-13 02:48:36.816588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffff0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.239 [2024-05-13 02:48:36.816601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:46.239 [2024-05-13 02:48:36.816657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:ffff0a88 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.239 [2024-05-13 02:48:36.816670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:46.239 #55 NEW cov: 12038 ft: 14907 corp: 31/879b lim: 40 exec/s: 55 rss: 72Mb L: 40/40 MS: 1 InsertRepeatedBytes- 00:07:46.239 [2024-05-13 02:48:36.866245] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:16ffffff 
cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.239 [2024-05-13 02:48:36.866269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.239 [2024-05-13 02:48:36.866345] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.239 [2024-05-13 02:48:36.866359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.239 [2024-05-13 02:48:36.866416] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.240 [2024-05-13 02:48:36.866429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:46.240 #56 NEW cov: 12038 ft: 14914 corp: 32/906b lim: 40 exec/s: 56 rss: 72Mb L: 27/40 MS: 1 ShuffleBytes- 00:07:46.240 [2024-05-13 02:48:36.906219] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a06ffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.240 [2024-05-13 02:48:36.906246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.240 [2024-05-13 02:48:36.906317] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:fcffffff cdw11:ffff0a88 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.240 [2024-05-13 02:48:36.906331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.240 #57 NEW cov: 12038 ft: 14931 corp: 33/922b lim: 40 exec/s: 57 rss: 72Mb L: 16/40 MS: 1 ChangeBinInt- 00:07:46.240 [2024-05-13 02:48:36.946474] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.240 [2024-05-13 02:48:36.946500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.240 [2024-05-13 02:48:36.946575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.240 [2024-05-13 02:48:36.946589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.240 [2024-05-13 02:48:36.946647] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.240 [2024-05-13 02:48:36.946660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:46.240 #58 NEW cov: 12038 ft: 14947 corp: 34/949b lim: 40 exec/s: 58 rss: 72Mb L: 27/40 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:07:46.240 [2024-05-13 02:48:36.986729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:06ffffff cdw11:ffefffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.240 [2024-05-13 02:48:36.986754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.240 [2024-05-13 02:48:36.986810] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ff0000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.240 [2024-05-13 02:48:36.986824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.240 [2024-05-13 02:48:36.986876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff40ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.240 [2024-05-13 02:48:36.986889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:46.240 [2024-05-13 02:48:36.986942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.240 [2024-05-13 02:48:36.986955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:46.240 #59 NEW cov: 12038 ft: 14957 corp: 35/984b lim: 40 exec/s: 59 rss: 72Mb L: 35/40 MS: 1 ChangeBit- 00:07:46.240 [2024-05-13 02:48:37.036688] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:06ffffff cdw11:ff64ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.240 [2024-05-13 02:48:37.036712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.240 [2024-05-13 02:48:37.036786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:fdffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.240 [2024-05-13 02:48:37.036800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.240 [2024-05-13 02:48:37.036861] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.240 [2024-05-13 02:48:37.036875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:46.499 #60 NEW cov: 12038 ft: 14962 corp: 36/1011b lim: 40 exec/s: 60 rss: 72Mb L: 27/40 MS: 1 ChangeByte- 00:07:46.499 [2024-05-13 02:48:37.076949] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:06ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.499 [2024-05-13 02:48:37.076973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.499 [2024-05-13 02:48:37.077045] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:fffffdff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.499 [2024-05-13 02:48:37.077059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.499 [2024-05-13 02:48:37.077117] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ff010000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.499 [2024-05-13 02:48:37.077130] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:46.499 [2024-05-13 02:48:37.077187] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00ff0084 cdw11:00ffff0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.499 [2024-05-13 02:48:37.077200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:46.499 [2024-05-13 02:48:37.117052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:06ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.499 [2024-05-13 02:48:37.117077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.499 [2024-05-13 02:48:37.117132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:fffffdff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.499 [2024-05-13 02:48:37.117145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.499 [2024-05-13 02:48:37.117203] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffa0 cdw11:ff010000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.499 [2024-05-13 02:48:37.117216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:46.499 [2024-05-13 02:48:37.117272] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00ff0084 cdw11:00ffff0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.499 [2024-05-13 02:48:37.117286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:46.499 #62 NEW cov: 12038 ft: 14984 corp: 37/1044b lim: 40 exec/s: 62 rss: 72Mb L: 33/40 MS: 2 CrossOver-ChangeByte- 00:07:46.499 [2024-05-13 02:48:37.156932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a06ffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.499 [2024-05-13 02:48:37.156957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.499 [2024-05-13 02:48:37.157032] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ff0a0084 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.499 [2024-05-13 02:48:37.157046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.499 #63 NEW cov: 12038 ft: 15006 corp: 38/1067b lim: 40 exec/s: 63 rss: 72Mb L: 23/40 MS: 1 PersAutoDict- DE: "\000\2040\235\2651\253@"- 00:07:46.499 [2024-05-13 02:48:37.197180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a06ffff cdw11:ffffff3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.499 [2024-05-13 02:48:37.197205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.499 [2024-05-13 02:48:37.197280] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 
cdw10:ffffffff cdw11:fffdffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.499 [2024-05-13 02:48:37.197294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.500 [2024-05-13 02:48:37.197354] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffff4fff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.500 [2024-05-13 02:48:37.197368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:46.500 #64 NEW cov: 12038 ft: 15043 corp: 39/1095b lim: 40 exec/s: 64 rss: 72Mb L: 28/40 MS: 1 ChangeByte- 00:07:46.500 [2024-05-13 02:48:37.237021] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:01000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.500 [2024-05-13 02:48:37.237046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.500 #65 NEW cov: 12038 ft: 15050 corp: 40/1104b lim: 40 exec/s: 32 rss: 72Mb L: 9/40 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\000"- 00:07:46.500 #65 DONE cov: 12038 ft: 15050 corp: 40/1104b lim: 40 exec/s: 32 rss: 72Mb 00:07:46.500 ###### Recommended dictionary. ###### 00:07:46.500 "\000\000\000\000\000\000\000\000" # Uses: 3 00:07:46.500 "\001\000\000\000\000\000\000\000" # Uses: 4 00:07:46.500 "\377\377\377\377\377\377\377\377" # Uses: 0 00:07:46.500 "\000\2040\242\302\265\362\304" # Uses: 0 00:07:46.500 "\000\2040\235\2651\253@" # Uses: 1 00:07:46.500 ###### End of recommended dictionary. ###### 00:07:46.500 Done 65 runs in 2 second(s) 00:07:46.500 [2024-05-13 02:48:37.258557] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:46.758 02:48:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_10.conf /var/tmp/suppress_nvmf_fuzz 00:07:46.758 02:48:37 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:46.758 02:48:37 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:46.758 02:48:37 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 11 1 0x1 00:07:46.758 02:48:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=11 00:07:46.758 02:48:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:46.758 02:48:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:46.758 02:48:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:07:46.758 02:48:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_11.conf 00:07:46.758 02:48:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:46.758 02:48:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:46.758 02:48:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 11 00:07:46.758 02:48:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4411 00:07:46.758 02:48:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:07:46.758 02:48:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 
subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' 00:07:46.758 02:48:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4411"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:46.758 02:48:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:46.758 02:48:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:46.758 02:48:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' -c /tmp/fuzz_json_11.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 -Z 11 00:07:46.758 [2024-05-13 02:48:37.417893] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:07:46.758 [2024-05-13 02:48:37.417965] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3502353 ] 00:07:46.758 EAL: No free 2048 kB hugepages reported on node 1 00:07:47.017 [2024-05-13 02:48:37.627300] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:47.017 [2024-05-13 02:48:37.664342] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.017 [2024-05-13 02:48:37.695715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.017 [2024-05-13 02:48:37.748131] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:47.017 [2024-05-13 02:48:37.764080] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:47.017 [2024-05-13 02:48:37.764502] tcp.c: 966:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4411 *** 00:07:47.017 INFO: Running with entropic power schedule (0xFF, 100). 00:07:47.017 INFO: Seed: 3637368474 00:07:47.017 INFO: Loaded 1 modules (350429 inline 8-bit counters): 350429 [0x279074c, 0x27e6029), 00:07:47.017 INFO: Loaded 1 PC tables (350429 PCs): 350429 [0x27e6030,0x2d3ee00), 00:07:47.017 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:07:47.017 INFO: A corpus is not provided, starting from an empty corpus 00:07:47.017 #2 INITED exec/s: 0 rss: 63Mb 00:07:47.017 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:47.017 This may also happen if the target rejected all inputs we tried so far 00:07:47.017 [2024-05-13 02:48:37.809160] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:b5000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.017 [2024-05-13 02:48:37.809194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.533 NEW_FUNC[1/686]: 0x4b2840 in fuzz_admin_security_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:223 00:07:47.533 NEW_FUNC[2/686]: 0x4e0360 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:47.533 #10 NEW cov: 11806 ft: 11807 corp: 2/14b lim: 40 exec/s: 0 rss: 70Mb L: 13/13 MS: 3 ChangeByte-InsertByte-InsertRepeatedBytes- 00:07:47.533 [2024-05-13 02:48:38.139901] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:b5000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.533 [2024-05-13 02:48:38.139941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.533 #26 NEW cov: 11936 ft: 12425 corp: 3/27b lim: 40 exec/s: 0 rss: 70Mb L: 13/13 MS: 1 ShuffleBytes- 00:07:47.533 [2024-05-13 02:48:38.210012] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:b5000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.533 [2024-05-13 02:48:38.210044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.533 #27 NEW cov: 11942 ft: 12644 corp: 4/41b lim: 40 exec/s: 0 rss: 70Mb L: 14/14 MS: 1 InsertByte- 00:07:47.533 [2024-05-13 02:48:38.260088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:b5000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.533 [2024-05-13 02:48:38.260122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.533 #28 NEW cov: 12027 ft: 12854 corp: 5/55b lim: 40 exec/s: 0 rss: 70Mb L: 14/14 MS: 1 InsertByte- 00:07:47.533 [2024-05-13 02:48:38.330296] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:b5000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.533 [2024-05-13 02:48:38.330326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.791 #29 NEW cov: 12027 ft: 12888 corp: 6/65b lim: 40 exec/s: 0 rss: 70Mb L: 10/14 MS: 1 EraseBytes- 00:07:47.791 [2024-05-13 02:48:38.380450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a595959 cdw11:59595959 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.791 [2024-05-13 02:48:38.380480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.791 [2024-05-13 02:48:38.380527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:59595959 cdw11:59595959 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.791 [2024-05-13 02:48:38.380543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) 
qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.791 #30 NEW cov: 12027 ft: 13664 corp: 7/81b lim: 40 exec/s: 0 rss: 70Mb L: 16/16 MS: 1 InsertRepeatedBytes- 00:07:47.791 [2024-05-13 02:48:38.440563] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:b5000000 cdw11:00b50000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.791 [2024-05-13 02:48:38.440592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.791 #31 NEW cov: 12027 ft: 13713 corp: 8/95b lim: 40 exec/s: 0 rss: 70Mb L: 14/16 MS: 1 CopyPart- 00:07:47.791 [2024-05-13 02:48:38.510740] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:b5000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.791 [2024-05-13 02:48:38.510770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.791 #32 NEW cov: 12027 ft: 13735 corp: 9/108b lim: 40 exec/s: 0 rss: 70Mb L: 13/16 MS: 1 CopyPart- 00:07:47.791 [2024-05-13 02:48:38.560872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00b50000 cdw11:00b50000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.791 [2024-05-13 02:48:38.560901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.049 #33 NEW cov: 12027 ft: 13849 corp: 10/122b lim: 40 exec/s: 0 rss: 70Mb L: 14/16 MS: 1 ShuffleBytes- 00:07:48.049 [2024-05-13 02:48:38.631030] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:b5000000 cdw11:00002400 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.049 [2024-05-13 02:48:38.631058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.049 #34 NEW cov: 12027 ft: 13958 corp: 11/132b lim: 40 exec/s: 0 rss: 70Mb L: 10/16 MS: 1 CrossOver- 00:07:48.049 [2024-05-13 02:48:38.701205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00b50000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.049 [2024-05-13 02:48:38.701234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.049 NEW_FUNC[1/1]: 0x1a07120 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:48.049 #35 NEW cov: 12050 ft: 14001 corp: 12/146b lim: 40 exec/s: 0 rss: 71Mb L: 14/16 MS: 1 CopyPart- 00:07:48.049 [2024-05-13 02:48:38.751498] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:b5000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.049 [2024-05-13 02:48:38.751530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.049 [2024-05-13 02:48:38.751579] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.049 [2024-05-13 02:48:38.751595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.049 [2024-05-13 02:48:38.751624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) 
qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.049 [2024-05-13 02:48:38.751639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.049 #36 NEW cov: 12050 ft: 14231 corp: 13/171b lim: 40 exec/s: 36 rss: 71Mb L: 25/25 MS: 1 InsertRepeatedBytes- 00:07:48.049 [2024-05-13 02:48:38.821559] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00b50000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.049 [2024-05-13 02:48:38.821588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.307 #37 NEW cov: 12050 ft: 14271 corp: 14/185b lim: 40 exec/s: 37 rss: 71Mb L: 14/25 MS: 1 CopyPart- 00:07:48.307 [2024-05-13 02:48:38.891775] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00002400 cdw11:00240100 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.307 [2024-05-13 02:48:38.891805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.307 #41 NEW cov: 12050 ft: 14341 corp: 15/195b lim: 40 exec/s: 41 rss: 71Mb L: 10/25 MS: 4 EraseBytes-EraseBytes-CrossOver-CMP- DE: "\001\000\000\003"- 00:07:48.307 [2024-05-13 02:48:38.951904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:b5000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.307 [2024-05-13 02:48:38.951934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.307 #42 NEW cov: 12050 ft: 14375 corp: 16/205b lim: 40 exec/s: 42 rss: 71Mb L: 10/25 MS: 1 CopyPart- 00:07:48.307 [2024-05-13 02:48:39.002023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:2ab50000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.307 [2024-05-13 02:48:39.002053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.307 #43 NEW cov: 12050 ft: 14401 corp: 17/219b lim: 40 exec/s: 43 rss: 71Mb L: 14/25 MS: 1 ChangeByte- 00:07:48.307 [2024-05-13 02:48:39.052187] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a595959 cdw11:59595959 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.307 [2024-05-13 02:48:39.052217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.307 [2024-05-13 02:48:39.052265] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:59595946 cdw11:59595959 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.307 [2024-05-13 02:48:39.052280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.307 #44 NEW cov: 12050 ft: 14466 corp: 18/235b lim: 40 exec/s: 44 rss: 71Mb L: 16/25 MS: 1 ChangeByte- 00:07:48.566 [2024-05-13 02:48:39.122341] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:b52d0000 cdw11:0000b500 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.566 [2024-05-13 02:48:39.122370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:07:48.566 #45 NEW cov: 12050 ft: 14489 corp: 19/250b lim: 40 exec/s: 45 rss: 71Mb L: 15/25 MS: 1 InsertByte- 00:07:48.566 [2024-05-13 02:48:39.172522] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00b50000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.566 [2024-05-13 02:48:39.172550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.566 #46 NEW cov: 12050 ft: 14522 corp: 20/264b lim: 40 exec/s: 46 rss: 71Mb L: 14/25 MS: 1 CopyPart- 00:07:48.566 [2024-05-13 02:48:39.222717] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a595959 cdw11:59595959 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.566 [2024-05-13 02:48:39.222747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.566 [2024-05-13 02:48:39.222780] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:59595946 cdw11:592b5959 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.566 [2024-05-13 02:48:39.222796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.566 #47 NEW cov: 12050 ft: 14557 corp: 21/280b lim: 40 exec/s: 47 rss: 71Mb L: 16/25 MS: 1 ChangeByte- 00:07:48.566 [2024-05-13 02:48:39.292814] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00b50000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.566 [2024-05-13 02:48:39.292843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.566 #48 NEW cov: 12050 ft: 14570 corp: 22/294b lim: 40 exec/s: 48 rss: 71Mb L: 14/25 MS: 1 CrossOver- 00:07:48.566 [2024-05-13 02:48:39.363043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a595959 cdw11:59595959 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.566 [2024-05-13 02:48:39.363073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.566 [2024-05-13 02:48:39.363121] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:59595946 cdw11:59595959 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.566 [2024-05-13 02:48:39.363137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.824 #49 NEW cov: 12050 ft: 14614 corp: 23/310b lim: 40 exec/s: 49 rss: 72Mb L: 16/25 MS: 1 CopyPart- 00:07:48.824 [2024-05-13 02:48:39.433184] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:b5000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.824 [2024-05-13 02:48:39.433216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.824 #50 NEW cov: 12050 ft: 14642 corp: 24/323b lim: 40 exec/s: 50 rss: 72Mb L: 13/25 MS: 1 ShuffleBytes- 00:07:48.824 [2024-05-13 02:48:39.483351] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a595959 cdw11:59595959 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.824 [2024-05-13 02:48:39.483389] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.824 [2024-05-13 02:48:39.483424] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:59595946 cdw11:592b5901 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.824 [2024-05-13 02:48:39.483440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.824 #51 NEW cov: 12050 ft: 14664 corp: 25/343b lim: 40 exec/s: 51 rss: 72Mb L: 20/25 MS: 1 PersAutoDict- DE: "\001\000\000\003"- 00:07:48.824 [2024-05-13 02:48:39.533428] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a595959 cdw11:59595959 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.824 [2024-05-13 02:48:39.533458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.824 #52 NEW cov: 12050 ft: 14688 corp: 26/357b lim: 40 exec/s: 52 rss: 72Mb L: 14/25 MS: 1 EraseBytes- 00:07:48.824 [2024-05-13 02:48:39.593555] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:9d51e371 cdw11:9f308400 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.824 [2024-05-13 02:48:39.593584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.824 #53 NEW cov: 12050 ft: 14705 corp: 27/371b lim: 40 exec/s: 53 rss: 72Mb L: 14/25 MS: 1 CMP- DE: "\235Q\343q\2370\204\000"- 00:07:49.082 [2024-05-13 02:48:39.643807] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a595959 cdw11:59595959 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.082 [2024-05-13 02:48:39.643837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.082 [2024-05-13 02:48:39.643885] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:59595959 cdw11:59594659 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.082 [2024-05-13 02:48:39.643901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.082 #54 NEW cov: 12050 ft: 14755 corp: 28/390b lim: 40 exec/s: 54 rss: 72Mb L: 19/25 MS: 1 CopyPart- 00:07:49.082 [2024-05-13 02:48:39.713931] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:b5000000 cdw11:0000b500 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.082 [2024-05-13 02:48:39.713962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.082 #55 NEW cov: 12050 ft: 14767 corp: 29/404b lim: 40 exec/s: 55 rss: 72Mb L: 14/25 MS: 1 ShuffleBytes- 00:07:49.082 [2024-05-13 02:48:39.764089] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a595959 cdw11:59595959 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.082 [2024-05-13 02:48:39.764118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.082 [2024-05-13 02:48:39.764166] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:59595946 cdw11:592a5959 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.082 [2024-05-13 
02:48:39.764181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.082 #56 NEW cov: 12050 ft: 14848 corp: 30/420b lim: 40 exec/s: 28 rss: 72Mb L: 16/25 MS: 1 ChangeBit- 00:07:49.082 #56 DONE cov: 12050 ft: 14848 corp: 30/420b lim: 40 exec/s: 28 rss: 72Mb 00:07:49.082 ###### Recommended dictionary. ###### 00:07:49.082 "\001\000\000\003" # Uses: 1 00:07:49.082 "\235Q\343q\2370\204\000" # Uses: 0 00:07:49.082 ###### End of recommended dictionary. ###### 00:07:49.082 Done 56 runs in 2 second(s) 00:07:49.083 [2024-05-13 02:48:39.801856] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:49.342 02:48:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_11.conf /var/tmp/suppress_nvmf_fuzz 00:07:49.342 02:48:39 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:49.342 02:48:39 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:49.342 02:48:39 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 12 1 0x1 00:07:49.342 02:48:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=12 00:07:49.342 02:48:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:49.342 02:48:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:49.342 02:48:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:07:49.342 02:48:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_12.conf 00:07:49.342 02:48:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:49.342 02:48:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:49.342 02:48:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 12 00:07:49.342 02:48:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4412 00:07:49.342 02:48:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:07:49.342 02:48:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' 00:07:49.342 02:48:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4412"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:49.342 02:48:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:49.342 02:48:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:49.342 02:48:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' -c /tmp/fuzz_json_12.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 -Z 12 00:07:49.342 [2024-05-13 02:48:39.963916] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 
00:07:49.342 [2024-05-13 02:48:39.963992] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3502882 ] 00:07:49.342 EAL: No free 2048 kB hugepages reported on node 1 00:07:49.600 [2024-05-13 02:48:40.178424] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:49.600 [2024-05-13 02:48:40.218322] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.600 [2024-05-13 02:48:40.250185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.600 [2024-05-13 02:48:40.302776] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:49.600 [2024-05-13 02:48:40.318724] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:49.600 [2024-05-13 02:48:40.319140] tcp.c: 966:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4412 *** 00:07:49.600 INFO: Running with entropic power schedule (0xFF, 100). 00:07:49.600 INFO: Seed: 1897427646 00:07:49.600 INFO: Loaded 1 modules (350429 inline 8-bit counters): 350429 [0x279074c, 0x27e6029), 00:07:49.600 INFO: Loaded 1 PC tables (350429 PCs): 350429 [0x27e6030,0x2d3ee00), 00:07:49.600 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:07:49.600 INFO: A corpus is not provided, starting from an empty corpus 00:07:49.600 #2 INITED exec/s: 0 rss: 63Mb 00:07:49.600 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:49.601 This may also happen if the target rejected all inputs we tried so far 00:07:49.601 [2024-05-13 02:48:40.386873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:22222222 cdw11:22222222 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.601 [2024-05-13 02:48:40.386912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.601 [2024-05-13 02:48:40.387005] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:22222222 cdw11:22222222 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.601 [2024-05-13 02:48:40.387021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.601 [2024-05-13 02:48:40.387098] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:22222222 cdw11:22222222 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:49.601 [2024-05-13 02:48:40.387113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.118 NEW_FUNC[1/686]: 0x4b45b0 in fuzz_admin_directive_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:241 00:07:50.118 NEW_FUNC[2/686]: 0x4e0360 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:50.118 #3 NEW cov: 11795 ft: 11804 corp: 2/31b lim: 40 exec/s: 0 rss: 70Mb L: 30/30 MS: 1 InsertRepeatedBytes- 00:07:50.118 [2024-05-13 02:48:40.716769] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:22222222 cdw11:22222222 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.118 [2024-05-13 02:48:40.716824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.118 [2024-05-13 02:48:40.716971] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:22222222 cdw11:22222222 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.118 [2024-05-13 02:48:40.716995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.118 #4 NEW cov: 11934 ft: 12852 corp: 3/54b lim: 40 exec/s: 0 rss: 70Mb L: 23/30 MS: 1 EraseBytes- 00:07:50.118 [2024-05-13 02:48:40.777358] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a666666 cdw11:66666666 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.118 [2024-05-13 02:48:40.777390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.118 [2024-05-13 02:48:40.777530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:66666666 cdw11:66666666 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.118 [2024-05-13 02:48:40.777548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.118 [2024-05-13 02:48:40.777677] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:66666666 cdw11:66666666 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.118 [2024-05-13 02:48:40.777695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.118 [2024-05-13 02:48:40.777828] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:66666666 cdw11:66666666 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.118 [2024-05-13 02:48:40.777844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:50.118 #5 NEW cov: 11940 ft: 13393 corp: 4/86b lim: 40 exec/s: 0 rss: 70Mb L: 32/32 MS: 1 InsertRepeatedBytes- 00:07:50.118 [2024-05-13 02:48:40.826989] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a666666 cdw11:660a6666 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.118 [2024-05-13 02:48:40.827019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.118 [2024-05-13 02:48:40.827158] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:66666666 cdw11:66666666 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.118 [2024-05-13 02:48:40.827178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.118 #6 NEW cov: 12025 ft: 13651 corp: 5/108b lim: 40 exec/s: 0 rss: 70Mb L: 22/32 MS: 1 CrossOver- 00:07:50.118 [2024-05-13 02:48:40.887488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:22222222 cdw11:22222222 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.118 [2024-05-13 02:48:40.887519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.118 [2024-05-13 02:48:40.887670] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:22222222 cdw11:22222222 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.118 [2024-05-13 02:48:40.887689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.118 [2024-05-13 02:48:40.887844] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:22222222 cdw11:22222222 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.118 [2024-05-13 02:48:40.887864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.118 #7 NEW cov: 12025 ft: 13791 corp: 6/139b lim: 40 exec/s: 0 rss: 70Mb L: 31/32 MS: 1 InsertByte- 00:07:50.377 [2024-05-13 02:48:40.937327] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a666666 cdw11:660a6666 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.377 [2024-05-13 02:48:40.937359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.377 [2024-05-13 02:48:40.937488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:66666666 cdw11:66666666 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.377 [2024-05-13 02:48:40.937507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.377 #8 NEW cov: 12025 ft: 13859 corp: 7/161b lim: 40 exec/s: 0 rss: 70Mb L: 22/32 MS: 1 ShuffleBytes- 00:07:50.377 [2024-05-13 02:48:40.997527] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:22222222 cdw11:22222222 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.377 [2024-05-13 02:48:40.997556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.377 [2024-05-13 02:48:40.997695] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:22222222 cdw11:22222222 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.377 [2024-05-13 02:48:40.997714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.377 #9 NEW cov: 12025 ft: 13980 corp: 8/184b lim: 40 exec/s: 0 rss: 70Mb L: 23/32 MS: 1 CrossOver- 00:07:50.377 [2024-05-13 02:48:41.057981] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:22222222 cdw11:22222222 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.377 [2024-05-13 02:48:41.058012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.377 [2024-05-13 02:48:41.058152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:22222222 cdw11:2c222222 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.377 [2024-05-13 02:48:41.058170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.377 [2024-05-13 02:48:41.058320] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:22222222 cdw11:22222222 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.377 [2024-05-13 02:48:41.058341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.377 #10 NEW cov: 12025 ft: 13997 corp: 9/214b lim: 40 exec/s: 0 rss: 70Mb L: 30/32 MS: 1 ChangeByte- 00:07:50.377 [2024-05-13 02:48:41.108201] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:22222222 cdw11:22222222 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.377 [2024-05-13 02:48:41.108230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.377 [2024-05-13 02:48:41.108368] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:22222222 cdw11:22222222 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.377 [2024-05-13 02:48:41.108393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.377 [2024-05-13 02:48:41.108529] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:22222222 cdw11:222222de SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.377 [2024-05-13 02:48:41.108549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.377 #11 NEW cov: 12025 ft: 14007 corp: 10/245b lim: 40 exec/s: 0 rss: 70Mb L: 31/32 MS: 1 ChangeBinInt- 00:07:50.377 [2024-05-13 02:48:41.168356] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:22222222 cdw11:220a2222 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.377 [2024-05-13 02:48:41.168389] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.377 [2024-05-13 02:48:41.168540] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:22222222 cdw11:22222222 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.377 [2024-05-13 02:48:41.168559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.377 [2024-05-13 02:48:41.168692] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:22222222 cdw11:222222de SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.377 [2024-05-13 02:48:41.168713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.636 #17 NEW cov: 12025 ft: 14043 corp: 11/276b lim: 40 exec/s: 0 rss: 70Mb L: 31/32 MS: 1 CrossOver- 00:07:50.636 [2024-05-13 02:48:41.228792] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:1a1a1a1a cdw11:1a1a1a1a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.636 [2024-05-13 02:48:41.228823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.636 [2024-05-13 02:48:41.228969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:1a1a1a1a cdw11:1a1a1a1a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.636 [2024-05-13 02:48:41.228989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.636 [2024-05-13 02:48:41.229129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:1a1a1a1a cdw11:1a1a1a1a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.636 [2024-05-13 02:48:41.229148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.636 [2024-05-13 02:48:41.229283] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:1a1a1a1a cdw11:1a1a1a1a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.636 [2024-05-13 02:48:41.229302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:50.636 #21 NEW cov: 12025 ft: 14078 corp: 12/310b lim: 40 exec/s: 0 rss: 70Mb L: 34/34 MS: 4 ShuffleBytes-CopyPart-InsertByte-InsertRepeatedBytes- 00:07:50.636 [2024-05-13 02:48:41.278725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:22222222 cdw11:22222222 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.636 [2024-05-13 02:48:41.278754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.636 [2024-05-13 02:48:41.278902] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:22222222 cdw11:22222222 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.636 [2024-05-13 02:48:41.278921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.636 [2024-05-13 02:48:41.279063] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:22222222 cdw11:22222222 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.636 [2024-05-13 
02:48:41.279081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.636 NEW_FUNC[1/1]: 0x1a07120 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:50.636 #22 NEW cov: 12048 ft: 14128 corp: 13/336b lim: 40 exec/s: 0 rss: 71Mb L: 26/34 MS: 1 CopyPart- 00:07:50.636 [2024-05-13 02:48:41.339252] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:22222222 cdw11:22222222 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.636 [2024-05-13 02:48:41.339283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.636 [2024-05-13 02:48:41.339433] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:22222222 cdw11:22222222 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.636 [2024-05-13 02:48:41.339453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.636 [2024-05-13 02:48:41.339583] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:22222222 cdw11:22222222 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.636 [2024-05-13 02:48:41.339601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.636 [2024-05-13 02:48:41.339736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:22222222 cdw11:dedddddd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.636 [2024-05-13 02:48:41.339753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:50.636 #23 NEW cov: 12048 ft: 14140 corp: 14/372b lim: 40 exec/s: 23 rss: 71Mb L: 36/36 MS: 1 CopyPart- 00:07:50.636 [2024-05-13 02:48:41.389355] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:1a1a1a1a cdw11:1a1a1a1a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.636 [2024-05-13 02:48:41.389389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.636 [2024-05-13 02:48:41.389533] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:1a1a1a1a cdw11:4141411a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.636 [2024-05-13 02:48:41.389553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.636 [2024-05-13 02:48:41.389686] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:1a1a1a1a cdw11:1a1a1a1a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.636 [2024-05-13 02:48:41.389704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.636 [2024-05-13 02:48:41.389836] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:1a1a1a1a cdw11:1a1a1a1a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.636 [2024-05-13 02:48:41.389855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:50.636 #24 NEW cov: 12048 ft: 14227 corp: 15/409b lim: 40 exec/s: 24 rss: 71Mb L: 
37/37 MS: 1 InsertRepeatedBytes- 00:07:50.895 [2024-05-13 02:48:41.449520] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffff22 cdw11:22222222 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.895 [2024-05-13 02:48:41.449551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.895 [2024-05-13 02:48:41.449705] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:22222222 cdw11:22222222 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.895 [2024-05-13 02:48:41.449724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.895 [2024-05-13 02:48:41.449864] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:22222222 cdw11:22222222 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.895 [2024-05-13 02:48:41.449882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.895 [2024-05-13 02:48:41.450029] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:22222222 cdw11:22222222 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.895 [2024-05-13 02:48:41.450048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:50.895 #25 NEW cov: 12048 ft: 14255 corp: 16/443b lim: 40 exec/s: 25 rss: 71Mb L: 34/37 MS: 1 InsertRepeatedBytes- 00:07:50.895 [2024-05-13 02:48:41.499610] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:22222222 cdw11:22222222 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.895 [2024-05-13 02:48:41.499639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.895 [2024-05-13 02:48:41.499787] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:22222222 cdw11:22222222 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.895 [2024-05-13 02:48:41.499808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.895 [2024-05-13 02:48:41.499945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:22220000 cdw11:00222222 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.895 [2024-05-13 02:48:41.499962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.895 [2024-05-13 02:48:41.500093] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:22222222 cdw11:222222de SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.895 [2024-05-13 02:48:41.500111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:50.895 #26 NEW cov: 12048 ft: 14332 corp: 17/482b lim: 40 exec/s: 26 rss: 71Mb L: 39/39 MS: 1 InsertRepeatedBytes- 00:07:50.895 [2024-05-13 02:48:41.559293] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:22222222 cdw11:22222222 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.895 [2024-05-13 02:48:41.559322] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.895 [2024-05-13 02:48:41.559476] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:22222222 cdw11:22222229 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.895 [2024-05-13 02:48:41.559495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.895 #27 NEW cov: 12048 ft: 14374 corp: 18/505b lim: 40 exec/s: 27 rss: 71Mb L: 23/39 MS: 1 ChangeBinInt- 00:07:50.895 [2024-05-13 02:48:41.609439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:22222222 cdw11:22222222 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.895 [2024-05-13 02:48:41.609468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.895 [2024-05-13 02:48:41.609620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:22222222 cdw11:22222229 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.895 [2024-05-13 02:48:41.609639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.895 #28 NEW cov: 12048 ft: 14385 corp: 19/528b lim: 40 exec/s: 28 rss: 71Mb L: 23/39 MS: 1 ChangeBit- 00:07:50.895 [2024-05-13 02:48:41.669285] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a666666 cdw11:660a6666 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.895 [2024-05-13 02:48:41.669316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.895 #29 NEW cov: 12048 ft: 15114 corp: 20/541b lim: 40 exec/s: 29 rss: 71Mb L: 13/39 MS: 1 EraseBytes- 00:07:51.154 [2024-05-13 02:48:41.720446] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:22222222 cdw11:22222222 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.154 [2024-05-13 02:48:41.720481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.154 [2024-05-13 02:48:41.720622] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:2222222d cdw11:22222222 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.154 [2024-05-13 02:48:41.720641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.154 [2024-05-13 02:48:41.720777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:22222222 cdw11:22222222 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.154 [2024-05-13 02:48:41.720796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.154 [2024-05-13 02:48:41.720937] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:dedddddd cdw11:ddddf560 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.154 [2024-05-13 02:48:41.720954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:51.154 #30 NEW cov: 12048 ft: 15154 corp: 21/573b lim: 40 exec/s: 30 rss: 71Mb L: 32/39 MS: 1 InsertByte- 00:07:51.154 [2024-05-13 02:48:41.770573] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a666666 cdw11:66666666 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.154 [2024-05-13 02:48:41.770604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.154 [2024-05-13 02:48:41.770753] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:66666666 cdw11:66666666 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.154 [2024-05-13 02:48:41.770770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.154 [2024-05-13 02:48:41.770911] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:66666666 cdw11:663f6666 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.154 [2024-05-13 02:48:41.770931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.154 [2024-05-13 02:48:41.771072] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:66666666 cdw11:66666666 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.154 [2024-05-13 02:48:41.771091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:51.154 #31 NEW cov: 12048 ft: 15172 corp: 22/605b lim: 40 exec/s: 31 rss: 71Mb L: 32/39 MS: 1 ChangeByte- 00:07:51.154 [2024-05-13 02:48:41.820420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:22222222 cdw11:22222222 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.154 [2024-05-13 02:48:41.820450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.154 [2024-05-13 02:48:41.820600] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:22222222 cdw11:22222222 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.154 [2024-05-13 02:48:41.820619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.154 [2024-05-13 02:48:41.820760] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:22222222 cdw11:22222222 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.154 [2024-05-13 02:48:41.820777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.154 #32 NEW cov: 12048 ft: 15210 corp: 23/636b lim: 40 exec/s: 32 rss: 71Mb L: 31/39 MS: 1 CrossOver- 00:07:51.154 [2024-05-13 02:48:41.870823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:22222222 cdw11:22220000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.154 [2024-05-13 02:48:41.870855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.154 [2024-05-13 02:48:41.871013] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.154 [2024-05-13 02:48:41.871031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.154 [2024-05-13 
02:48:41.871171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:22222222 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.154 [2024-05-13 02:48:41.871191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.154 [2024-05-13 02:48:41.871330] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:22222222 cdw11:22292222 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.154 [2024-05-13 02:48:41.871350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:51.154 #33 NEW cov: 12048 ft: 15215 corp: 24/673b lim: 40 exec/s: 33 rss: 71Mb L: 37/39 MS: 1 InsertRepeatedBytes- 00:07:51.154 [2024-05-13 02:48:41.920083] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a666666 cdw11:660a6666 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.154 [2024-05-13 02:48:41.920114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.154 #34 NEW cov: 12048 ft: 15287 corp: 25/681b lim: 40 exec/s: 34 rss: 71Mb L: 8/39 MS: 1 EraseBytes- 00:07:51.412 [2024-05-13 02:48:41.980794] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a666666 cdw11:660a6666 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.412 [2024-05-13 02:48:41.980824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.412 [2024-05-13 02:48:41.980968] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:66666666 cdw11:66666666 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.412 [2024-05-13 02:48:41.980985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.412 [2024-05-13 02:48:41.981128] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:660a6666 cdw11:66666666 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.412 [2024-05-13 02:48:41.981147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.412 #35 NEW cov: 12048 ft: 15347 corp: 26/706b lim: 40 exec/s: 35 rss: 71Mb L: 25/39 MS: 1 CopyPart- 00:07:51.412 [2024-05-13 02:48:42.031297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:22222222 cdw11:22220000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.412 [2024-05-13 02:48:42.031327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.412 [2024-05-13 02:48:42.031479] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.412 [2024-05-13 02:48:42.031495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.412 [2024-05-13 02:48:42.031644] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:22222322 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.412 [2024-05-13 02:48:42.031663] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.412 [2024-05-13 02:48:42.031805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:22222222 cdw11:22292222 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.412 [2024-05-13 02:48:42.031827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:51.412 #36 NEW cov: 12048 ft: 15383 corp: 27/743b lim: 40 exec/s: 36 rss: 72Mb L: 37/39 MS: 1 ChangeBit- 00:07:51.412 [2024-05-13 02:48:42.091530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:1a1a1a1a cdw11:1a1a1a1a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.412 [2024-05-13 02:48:42.091560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.412 [2024-05-13 02:48:42.091699] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:1a1a1a1a cdw11:1a1a1a1a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.412 [2024-05-13 02:48:42.091716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.412 [2024-05-13 02:48:42.091858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:1a1a1a1a cdw11:1a1a1a1a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.412 [2024-05-13 02:48:42.091876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.412 [2024-05-13 02:48:42.092016] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:1a1a1a1a cdw11:2f1a1a1a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.412 [2024-05-13 02:48:42.092037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:51.412 #37 NEW cov: 12048 ft: 15394 corp: 28/778b lim: 40 exec/s: 37 rss: 72Mb L: 35/39 MS: 1 InsertByte- 00:07:51.412 [2024-05-13 02:48:42.141514] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a666666 cdw11:22222222 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.412 [2024-05-13 02:48:42.141542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.412 [2024-05-13 02:48:42.141675] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:22222222 cdw11:222222de SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.412 [2024-05-13 02:48:42.141695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.412 [2024-05-13 02:48:42.141834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:dddddd66 cdw11:663f6666 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.412 [2024-05-13 02:48:42.141852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.412 [2024-05-13 02:48:42.141994] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:66666666 cdw11:66666666 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.412 [2024-05-13 
02:48:42.142013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:51.412 #38 NEW cov: 12048 ft: 15398 corp: 29/810b lim: 40 exec/s: 38 rss: 72Mb L: 32/39 MS: 1 CrossOver- 00:07:51.412 [2024-05-13 02:48:42.201778] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:22222222 cdw11:22222222 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.412 [2024-05-13 02:48:42.201809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.412 [2024-05-13 02:48:42.201953] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:22222222 cdw11:22222222 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.412 [2024-05-13 02:48:42.201971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.412 [2024-05-13 02:48:42.202098] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:22222222 cdw11:22220104 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.412 [2024-05-13 02:48:42.202120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.413 [2024-05-13 02:48:42.202254] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:22222222 cdw11:22222222 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.413 [2024-05-13 02:48:42.202274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:51.671 #39 NEW cov: 12048 ft: 15427 corp: 30/843b lim: 40 exec/s: 39 rss: 72Mb L: 33/39 MS: 1 CMP- DE: "\001\004"- 00:07:51.671 [2024-05-13 02:48:42.261077] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:22222222 cdw11:22222222 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.671 [2024-05-13 02:48:42.261108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.671 #40 NEW cov: 12048 ft: 15435 corp: 31/855b lim: 40 exec/s: 40 rss: 72Mb L: 12/39 MS: 1 CrossOver- 00:07:51.671 [2024-05-13 02:48:42.321559] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:22222222 cdw11:22222222 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.671 [2024-05-13 02:48:42.321588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.671 [2024-05-13 02:48:42.321726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000017 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.671 [2024-05-13 02:48:42.321744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.671 #41 NEW cov: 12048 ft: 15442 corp: 32/878b lim: 40 exec/s: 41 rss: 72Mb L: 23/39 MS: 1 ChangeBinInt- 00:07:51.671 [2024-05-13 02:48:42.372367] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:1a1a1a1a cdw11:1a1a1a1a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.671 [2024-05-13 02:48:42.372401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:07:51.671 [2024-05-13 02:48:42.372535] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:1a1a1a1a cdw11:1a1a1a1a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.671 [2024-05-13 02:48:42.372553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.671 [2024-05-13 02:48:42.372692] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:5a1a1a1a cdw11:1a1a1a1a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.671 [2024-05-13 02:48:42.372709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.671 [2024-05-13 02:48:42.372847] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:1a1a1a1a cdw11:1a1a1a1a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.671 [2024-05-13 02:48:42.372866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:51.671 #42 NEW cov: 12048 ft: 15453 corp: 33/912b lim: 40 exec/s: 21 rss: 72Mb L: 34/39 MS: 1 ChangeBit- 00:07:51.671 #42 DONE cov: 12048 ft: 15453 corp: 33/912b lim: 40 exec/s: 21 rss: 72Mb 00:07:51.671 ###### Recommended dictionary. ###### 00:07:51.671 "\001\004" # Uses: 0 00:07:51.671 ###### End of recommended dictionary. ###### 00:07:51.671 Done 42 runs in 2 second(s) 00:07:51.671 [2024-05-13 02:48:42.392333] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:51.930 02:48:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_12.conf /var/tmp/suppress_nvmf_fuzz 00:07:51.930 02:48:42 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:51.930 02:48:42 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:51.930 02:48:42 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 13 1 0x1 00:07:51.930 02:48:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=13 00:07:51.930 02:48:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:51.930 02:48:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:51.930 02:48:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:07:51.930 02:48:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_13.conf 00:07:51.930 02:48:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:51.930 02:48:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:51.930 02:48:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 13 00:07:51.930 02:48:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4413 00:07:51.930 02:48:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:07:51.930 02:48:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' 00:07:51.930 02:48:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4413"/' 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:51.930 02:48:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:51.930 02:48:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:51.930 02:48:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' -c /tmp/fuzz_json_13.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 -Z 13 00:07:51.930 [2024-05-13 02:48:42.554876] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:07:51.930 [2024-05-13 02:48:42.554939] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3503194 ] 00:07:51.930 EAL: No free 2048 kB hugepages reported on node 1 00:07:52.189 [2024-05-13 02:48:42.775717] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:52.189 [2024-05-13 02:48:42.815306] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.189 [2024-05-13 02:48:42.844243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.189 [2024-05-13 02:48:42.896670] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:52.189 [2024-05-13 02:48:42.912627] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:52.189 [2024-05-13 02:48:42.913011] tcp.c: 966:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4413 *** 00:07:52.189 INFO: Running with entropic power schedule (0xFF, 100). 00:07:52.189 INFO: Seed: 195447329 00:07:52.189 INFO: Loaded 1 modules (350429 inline 8-bit counters): 350429 [0x279074c, 0x27e6029), 00:07:52.189 INFO: Loaded 1 PC tables (350429 PCs): 350429 [0x27e6030,0x2d3ee00), 00:07:52.189 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:07:52.189 INFO: A corpus is not provided, starting from an empty corpus 00:07:52.189 #2 INITED exec/s: 0 rss: 63Mb 00:07:52.189 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:52.189 This may also happen if the target rejected all inputs we tried so far 00:07:52.189 [2024-05-13 02:48:42.983868] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.189 [2024-05-13 02:48:42.983907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.189 [2024-05-13 02:48:42.984000] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.189 [2024-05-13 02:48:42.984015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.706 NEW_FUNC[1/685]: 0x4b6170 in fuzz_admin_directive_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:257 00:07:52.706 NEW_FUNC[2/685]: 0x4e0360 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:52.706 #4 NEW cov: 11792 ft: 11793 corp: 2/20b lim: 40 exec/s: 0 rss: 70Mb L: 19/19 MS: 2 ChangeByte-InsertRepeatedBytes- 00:07:52.706 [2024-05-13 02:48:43.313922] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.706 [2024-05-13 02:48:43.313968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.706 [2024-05-13 02:48:43.314104] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:000000c9 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.706 [2024-05-13 02:48:43.314128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.706 #5 NEW cov: 11922 ft: 12554 corp: 3/40b lim: 40 exec/s: 0 rss: 70Mb L: 20/20 MS: 1 InsertByte- 00:07:52.706 [2024-05-13 02:48:43.363862] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.706 [2024-05-13 02:48:43.363890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.706 [2024-05-13 02:48:43.364022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:c9000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.706 [2024-05-13 02:48:43.364042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.706 #6 NEW cov: 11928 ft: 12861 corp: 4/60b lim: 40 exec/s: 0 rss: 70Mb L: 20/20 MS: 1 ShuffleBytes- 00:07:52.706 [2024-05-13 02:48:43.403762] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.706 [2024-05-13 02:48:43.403792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.706 #9 NEW cov: 12013 ft: 13413 corp: 5/74b lim: 40 exec/s: 0 rss: 70Mb L: 14/20 MS: 3 CrossOver-CrossOver-CrossOver- 00:07:52.706 
[2024-05-13 02:48:43.444029] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.706 [2024-05-13 02:48:43.444056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.706 [2024-05-13 02:48:43.444175] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:c9000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.706 [2024-05-13 02:48:43.444193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.706 #10 NEW cov: 12013 ft: 13540 corp: 6/94b lim: 40 exec/s: 0 rss: 70Mb L: 20/20 MS: 1 CopyPart- 00:07:52.706 [2024-05-13 02:48:43.484233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a000098 cdw11:7a01b5a1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.706 [2024-05-13 02:48:43.484262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.706 [2024-05-13 02:48:43.484402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:30840000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.706 [2024-05-13 02:48:43.484430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.965 #11 NEW cov: 12013 ft: 13638 corp: 7/116b lim: 40 exec/s: 0 rss: 70Mb L: 22/22 MS: 1 CMP- DE: "\230z\001\265\2410\204\000"- 00:07:52.965 [2024-05-13 02:48:43.524335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a000098 cdw11:7a01b5a1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.965 [2024-05-13 02:48:43.524361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.965 [2024-05-13 02:48:43.524498] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:35840000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.965 [2024-05-13 02:48:43.524515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.965 #12 NEW cov: 12013 ft: 13692 corp: 8/138b lim: 40 exec/s: 0 rss: 70Mb L: 22/22 MS: 1 ChangeASCIIInt- 00:07:52.965 [2024-05-13 02:48:43.574495] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.965 [2024-05-13 02:48:43.574522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.965 [2024-05-13 02:48:43.574642] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:c9000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.965 [2024-05-13 02:48:43.574659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.965 #13 NEW cov: 12013 ft: 13836 corp: 9/158b lim: 40 exec/s: 0 rss: 70Mb L: 20/22 MS: 1 ChangeASCIIInt- 00:07:52.965 [2024-05-13 02:48:43.614560] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.965 [2024-05-13 02:48:43.614589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.965 [2024-05-13 02:48:43.614712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:c9000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.965 [2024-05-13 02:48:43.614730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.965 #14 NEW cov: 12013 ft: 13926 corp: 10/178b lim: 40 exec/s: 0 rss: 70Mb L: 20/22 MS: 1 ShuffleBytes- 00:07:52.965 [2024-05-13 02:48:43.654832] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0000987a cdw11:01b5a130 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.965 [2024-05-13 02:48:43.654858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.965 [2024-05-13 02:48:43.654993] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:84000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.965 [2024-05-13 02:48:43.655010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.965 [2024-05-13 02:48:43.655140] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:c9000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.965 [2024-05-13 02:48:43.655160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.965 #15 NEW cov: 12013 ft: 14177 corp: 11/206b lim: 40 exec/s: 0 rss: 70Mb L: 28/28 MS: 1 PersAutoDict- DE: "\230z\001\265\2410\204\000"- 00:07:52.965 [2024-05-13 02:48:43.704826] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.965 [2024-05-13 02:48:43.704852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.965 [2024-05-13 02:48:43.704982] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:c9000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.965 [2024-05-13 02:48:43.704999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.965 #16 NEW cov: 12013 ft: 14229 corp: 12/226b lim: 40 exec/s: 0 rss: 70Mb L: 20/28 MS: 1 ChangeASCIIInt- 00:07:52.965 [2024-05-13 02:48:43.745384] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0000987a cdw11:01c6c6c6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.965 [2024-05-13 02:48:43.745409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.965 [2024-05-13 02:48:43.745532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:c6c6c6c6 cdw11:c6c6c6b5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:52.965 [2024-05-13 02:48:43.745548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.965 [2024-05-13 02:48:43.745678] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:a1308400 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.965 [2024-05-13 02:48:43.745695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.965 [2024-05-13 02:48:43.745820] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:0000c900 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.965 [2024-05-13 02:48:43.745837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:53.223 #17 NEW cov: 12013 ft: 14681 corp: 13/264b lim: 40 exec/s: 0 rss: 70Mb L: 38/38 MS: 1 InsertRepeatedBytes- 00:07:53.223 [2024-05-13 02:48:43.795028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.223 [2024-05-13 02:48:43.795056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.223 [2024-05-13 02:48:43.795193] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:c9000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.223 [2024-05-13 02:48:43.795213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.223 #18 NEW cov: 12013 ft: 14701 corp: 14/284b lim: 40 exec/s: 0 rss: 71Mb L: 20/38 MS: 1 ShuffleBytes- 00:07:53.223 [2024-05-13 02:48:43.835439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0000987a cdw11:01b5a130 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.223 [2024-05-13 02:48:43.835466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.223 [2024-05-13 02:48:43.835600] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:84000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.223 [2024-05-13 02:48:43.835616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.223 [2024-05-13 02:48:43.835745] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:37ffff00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.223 [2024-05-13 02:48:43.835762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.223 NEW_FUNC[1/1]: 0x1a07120 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:53.223 #19 NEW cov: 12036 ft: 14732 corp: 15/312b lim: 40 exec/s: 0 rss: 71Mb L: 28/38 MS: 1 ChangeBinInt- 00:07:53.223 [2024-05-13 02:48:43.875570] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0000987a cdw11:01b5a130 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.223 [2024-05-13 02:48:43.875598] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.223 [2024-05-13 02:48:43.875731] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:84000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.223 [2024-05-13 02:48:43.875748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.223 [2024-05-13 02:48:43.875877] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:0037ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.223 [2024-05-13 02:48:43.875895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.223 #20 NEW cov: 12036 ft: 14760 corp: 16/340b lim: 40 exec/s: 0 rss: 71Mb L: 28/38 MS: 1 ShuffleBytes- 00:07:53.223 [2024-05-13 02:48:43.925406] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00004b00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.223 [2024-05-13 02:48:43.925434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.223 [2024-05-13 02:48:43.925567] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00c90000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.223 [2024-05-13 02:48:43.925584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.223 #21 NEW cov: 12036 ft: 14793 corp: 17/361b lim: 40 exec/s: 21 rss: 71Mb L: 21/38 MS: 1 InsertByte- 00:07:53.223 [2024-05-13 02:48:43.976072] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0000987a cdw11:01c6c6c6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.223 [2024-05-13 02:48:43.976100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.223 [2024-05-13 02:48:43.976221] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:c6c6c6cf cdw11:c6c6c6c6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.223 [2024-05-13 02:48:43.976238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.223 [2024-05-13 02:48:43.976387] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:b5a13084 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.223 [2024-05-13 02:48:43.976404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.223 [2024-05-13 02:48:43.976530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:000000c9 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.223 [2024-05-13 02:48:43.976548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:53.223 #22 NEW cov: 12036 ft: 14820 corp: 18/400b lim: 40 exec/s: 22 rss: 71Mb L: 39/39 MS: 1 InsertByte- 00:07:53.223 [2024-05-13 02:48:44.025947] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0000987a cdw11:01b5a130 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.223 [2024-05-13 02:48:44.025976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.223 [2024-05-13 02:48:44.026111] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:84000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.223 [2024-05-13 02:48:44.026131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.223 [2024-05-13 02:48:44.026261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:37ffff00 cdw11:00000036 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.223 [2024-05-13 02:48:44.026281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.481 #23 NEW cov: 12036 ft: 14839 corp: 19/424b lim: 40 exec/s: 23 rss: 71Mb L: 24/39 MS: 1 EraseBytes- 00:07:53.481 [2024-05-13 02:48:44.065916] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:007e0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.481 [2024-05-13 02:48:44.065946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.481 [2024-05-13 02:48:44.066075] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:c9000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.481 [2024-05-13 02:48:44.066095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.481 #24 NEW cov: 12036 ft: 14853 corp: 20/444b lim: 40 exec/s: 24 rss: 71Mb L: 20/39 MS: 1 ChangeByte- 00:07:53.481 [2024-05-13 02:48:44.106013] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:57000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.481 [2024-05-13 02:48:44.106041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.481 [2024-05-13 02:48:44.106172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.481 [2024-05-13 02:48:44.106194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.481 #25 NEW cov: 12036 ft: 14863 corp: 21/463b lim: 40 exec/s: 25 rss: 71Mb L: 19/39 MS: 1 ChangeByte- 00:07:53.481 [2024-05-13 02:48:44.146129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.481 [2024-05-13 02:48:44.146157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.481 [2024-05-13 02:48:44.146285] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:c9000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.481 [2024-05-13 02:48:44.146303] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.481 #26 NEW cov: 12036 ft: 14913 corp: 22/484b lim: 40 exec/s: 26 rss: 71Mb L: 21/39 MS: 1 CrossOver- 00:07:53.481 [2024-05-13 02:48:44.186445] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.481 [2024-05-13 02:48:44.186473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.481 [2024-05-13 02:48:44.186597] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:c9000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.481 [2024-05-13 02:48:44.186616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.481 [2024-05-13 02:48:44.186742] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000033 cdw11:987a01b5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.481 [2024-05-13 02:48:44.186764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.481 #27 NEW cov: 12036 ft: 14931 corp: 23/512b lim: 40 exec/s: 27 rss: 71Mb L: 28/39 MS: 1 PersAutoDict- DE: "\230z\001\265\2410\204\000"- 00:07:53.481 [2024-05-13 02:48:44.236124] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:02000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.481 [2024-05-13 02:48:44.236151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.481 #28 NEW cov: 12036 ft: 14971 corp: 24/526b lim: 40 exec/s: 28 rss: 71Mb L: 14/39 MS: 1 ChangeBinInt- 00:07:53.481 [2024-05-13 02:48:44.276486] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00020000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.481 [2024-05-13 02:48:44.276516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.481 [2024-05-13 02:48:44.276650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:c9000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.481 [2024-05-13 02:48:44.276668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.739 #29 NEW cov: 12036 ft: 14978 corp: 25/546b lim: 40 exec/s: 29 rss: 71Mb L: 20/39 MS: 1 ChangeBit- 00:07:53.739 [2024-05-13 02:48:44.316436] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:02000000 cdw11:000000b5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.739 [2024-05-13 02:48:44.316465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.739 #30 NEW cov: 12036 ft: 14984 corp: 26/560b lim: 40 exec/s: 30 rss: 71Mb L: 14/39 MS: 1 ChangeByte- 00:07:53.739 [2024-05-13 02:48:44.367002] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0000987a cdw11:01b5a132 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:07:53.739 [2024-05-13 02:48:44.367030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.739 [2024-05-13 02:48:44.367160] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:84000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.739 [2024-05-13 02:48:44.367179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.740 [2024-05-13 02:48:44.367317] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:c9000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.740 [2024-05-13 02:48:44.367334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.740 #31 NEW cov: 12036 ft: 15012 corp: 27/588b lim: 40 exec/s: 31 rss: 72Mb L: 28/39 MS: 1 ChangeBit- 00:07:53.740 [2024-05-13 02:48:44.406862] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00400000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.740 [2024-05-13 02:48:44.406891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.740 [2024-05-13 02:48:44.407016] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.740 [2024-05-13 02:48:44.407034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.740 #32 NEW cov: 12036 ft: 15069 corp: 28/607b lim: 40 exec/s: 32 rss: 72Mb L: 19/39 MS: 1 ChangeBit- 00:07:53.740 [2024-05-13 02:48:44.447045] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.740 [2024-05-13 02:48:44.447076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.740 [2024-05-13 02:48:44.447198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.740 [2024-05-13 02:48:44.447217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.740 #33 NEW cov: 12036 ft: 15140 corp: 29/627b lim: 40 exec/s: 33 rss: 72Mb L: 20/39 MS: 1 CopyPart- 00:07:53.740 [2024-05-13 02:48:44.487429] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0000987a cdw11:01b5a132 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.740 [2024-05-13 02:48:44.487457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.740 [2024-05-13 02:48:44.487579] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:84000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.740 [2024-05-13 02:48:44.487598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.740 [2024-05-13 
02:48:44.487716] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:0000c900 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.740 [2024-05-13 02:48:44.487732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.740 #34 NEW cov: 12036 ft: 15196 corp: 30/654b lim: 40 exec/s: 34 rss: 72Mb L: 27/39 MS: 1 CrossOver- 00:07:53.740 [2024-05-13 02:48:44.537017] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0200987a cdw11:01b5a130 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.740 [2024-05-13 02:48:44.537044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.998 #40 NEW cov: 12036 ft: 15212 corp: 31/668b lim: 40 exec/s: 40 rss: 72Mb L: 14/39 MS: 1 PersAutoDict- DE: "\230z\001\265\2410\204\000"- 00:07:53.998 [2024-05-13 02:48:44.577303] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.998 [2024-05-13 02:48:44.577330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.998 [2024-05-13 02:48:44.577464] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:c9000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.998 [2024-05-13 02:48:44.577481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.998 #41 NEW cov: 12036 ft: 15258 corp: 32/688b lim: 40 exec/s: 41 rss: 72Mb L: 20/39 MS: 1 ChangeBit- 00:07:53.998 [2024-05-13 02:48:44.617753] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0000987a cdw11:01b5a130 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.998 [2024-05-13 02:48:44.617780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.998 [2024-05-13 02:48:44.617923] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:84000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.998 [2024-05-13 02:48:44.617940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.998 [2024-05-13 02:48:44.618082] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:37ffff00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.998 [2024-05-13 02:48:44.618101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.998 #42 NEW cov: 12036 ft: 15268 corp: 33/716b lim: 40 exec/s: 42 rss: 72Mb L: 28/39 MS: 1 ChangeASCIIInt- 00:07:53.998 [2024-05-13 02:48:44.657412] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.998 [2024-05-13 02:48:44.657439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.998 #43 NEW cov: 12036 ft: 15269 corp: 34/731b lim: 40 exec/s: 43 
rss: 72Mb L: 15/39 MS: 1 EraseBytes- 00:07:53.998 [2024-05-13 02:48:44.698005] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:23000098 cdw11:7a01b5a1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.999 [2024-05-13 02:48:44.698032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.999 [2024-05-13 02:48:44.698176] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:32840000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.999 [2024-05-13 02:48:44.698195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.999 [2024-05-13 02:48:44.698321] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00c90000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.999 [2024-05-13 02:48:44.698339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.999 #44 NEW cov: 12036 ft: 15284 corp: 35/760b lim: 40 exec/s: 44 rss: 72Mb L: 29/39 MS: 1 InsertByte- 00:07:53.999 [2024-05-13 02:48:44.737681] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0200987a cdw11:01b5a130 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.999 [2024-05-13 02:48:44.737708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.999 #45 NEW cov: 12036 ft: 15306 corp: 36/775b lim: 40 exec/s: 45 rss: 72Mb L: 15/39 MS: 1 InsertByte- 00:07:53.999 [2024-05-13 02:48:44.778027] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.999 [2024-05-13 02:48:44.778054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.999 [2024-05-13 02:48:44.778183] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:c9000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.999 [2024-05-13 02:48:44.778200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.257 #46 NEW cov: 12036 ft: 15312 corp: 37/795b lim: 40 exec/s: 46 rss: 72Mb L: 20/39 MS: 1 ChangeBinInt- 00:07:54.257 [2024-05-13 02:48:44.818303] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.257 [2024-05-13 02:48:44.818331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.257 [2024-05-13 02:48:44.818470] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.257 [2024-05-13 02:48:44.818487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.257 [2024-05-13 02:48:44.818612] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff 
cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.257 [2024-05-13 02:48:44.818632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.257 #47 NEW cov: 12036 ft: 15318 corp: 38/824b lim: 40 exec/s: 47 rss: 72Mb L: 29/39 MS: 1 InsertRepeatedBytes- 00:07:54.257 [2024-05-13 02:48:44.858497] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0000987a cdw11:01b5a130 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.257 [2024-05-13 02:48:44.858525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.257 [2024-05-13 02:48:44.858651] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:84000084 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.257 [2024-05-13 02:48:44.858670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.258 [2024-05-13 02:48:44.858797] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:c9000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.258 [2024-05-13 02:48:44.858814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.258 #48 NEW cov: 12036 ft: 15329 corp: 39/852b lim: 40 exec/s: 48 rss: 72Mb L: 28/39 MS: 1 CopyPart- 00:07:54.258 [2024-05-13 02:48:44.898277] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.258 [2024-05-13 02:48:44.898305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.258 [2024-05-13 02:48:44.898446] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:c9000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.258 [2024-05-13 02:48:44.898463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.258 #49 NEW cov: 12036 ft: 15344 corp: 40/872b lim: 40 exec/s: 49 rss: 72Mb L: 20/39 MS: 1 CrossOver- 00:07:54.258 [2024-05-13 02:48:44.938451] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.258 [2024-05-13 02:48:44.938478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.258 [2024-05-13 02:48:44.938618] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.258 [2024-05-13 02:48:44.938635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.258 #50 NEW cov: 12036 ft: 15347 corp: 41/892b lim: 40 exec/s: 25 rss: 72Mb L: 20/39 MS: 1 ChangeASCIIInt- 00:07:54.258 #50 DONE cov: 12036 ft: 15347 corp: 41/892b lim: 40 exec/s: 25 rss: 72Mb 00:07:54.258 ###### Recommended dictionary. ###### 00:07:54.258 "\230z\001\265\2410\204\000" # Uses: 3 00:07:54.258 ###### End of recommended dictionary. 
###### 00:07:54.258 Done 50 runs in 2 second(s) 00:07:54.258 [2024-05-13 02:48:44.965535] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:54.517 02:48:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_13.conf /var/tmp/suppress_nvmf_fuzz 00:07:54.517 02:48:45 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:54.517 02:48:45 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:54.517 02:48:45 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 14 1 0x1 00:07:54.517 02:48:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=14 00:07:54.517 02:48:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:54.517 02:48:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:54.517 02:48:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:07:54.517 02:48:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_14.conf 00:07:54.517 02:48:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:54.517 02:48:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:54.517 02:48:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 14 00:07:54.517 02:48:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4414 00:07:54.517 02:48:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:07:54.517 02:48:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' 00:07:54.517 02:48:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4414"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:54.517 02:48:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:54.517 02:48:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:54.517 02:48:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' -c /tmp/fuzz_json_14.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 -Z 14 00:07:54.517 [2024-05-13 02:48:45.126532] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:07:54.517 [2024-05-13 02:48:45.126599] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3503705 ] 00:07:54.517 EAL: No free 2048 kB hugepages reported on node 1 00:07:54.776 [2024-05-13 02:48:45.348823] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:54.776 [2024-05-13 02:48:45.386988] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.776 [2024-05-13 02:48:45.418124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.776 [2024-05-13 02:48:45.470297] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:54.776 [2024-05-13 02:48:45.486257] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:54.776 [2024-05-13 02:48:45.486668] tcp.c: 966:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4414 *** 00:07:54.776 INFO: Running with entropic power schedule (0xFF, 100). 00:07:54.776 INFO: Seed: 2771433490 00:07:54.776 INFO: Loaded 1 modules (350429 inline 8-bit counters): 350429 [0x279074c, 0x27e6029), 00:07:54.776 INFO: Loaded 1 PC tables (350429 PCs): 350429 [0x27e6030,0x2d3ee00), 00:07:54.776 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:07:54.776 INFO: A corpus is not provided, starting from an empty corpus 00:07:54.776 #2 INITED exec/s: 0 rss: 63Mb 00:07:54.776 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:54.776 This may also happen if the target rejected all inputs we tried so far 00:07:54.776 [2024-05-13 02:48:45.542468] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.776 [2024-05-13 02:48:45.542505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.776 [2024-05-13 02:48:45.542576] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.776 [2024-05-13 02:48:45.542595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.776 [2024-05-13 02:48:45.542666] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.776 [2024-05-13 02:48:45.542688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.034 NEW_FUNC[1/686]: 0x4b7d30 in fuzz_admin_set_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:392 00:07:55.034 NEW_FUNC[2/686]: 0x4e0360 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:55.034 #10 NEW cov: 11786 ft: 11766 corp: 2/28b lim: 35 exec/s: 0 rss: 70Mb L: 27/27 MS: 3 CMP-CrossOver-InsertRepeatedBytes- DE: "\001\000\000\000"- 00:07:55.293 [2024-05-13 02:48:45.853262] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.293 [2024-05-13 02:48:45.853307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.293 [2024-05-13 02:48:45.853400] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.293 [2024-05-13 02:48:45.853422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) 
qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.293 [2024-05-13 02:48:45.853501] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.293 [2024-05-13 02:48:45.853520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.293 [2024-05-13 02:48:45.853595] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.293 [2024-05-13 02:48:45.853614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:55.293 #11 NEW cov: 11916 ft: 12623 corp: 3/58b lim: 35 exec/s: 0 rss: 70Mb L: 30/30 MS: 1 CopyPart- 00:07:55.293 [2024-05-13 02:48:45.903229] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.293 [2024-05-13 02:48:45.903255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.293 [2024-05-13 02:48:45.903325] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.293 [2024-05-13 02:48:45.903340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.293 [2024-05-13 02:48:45.903398] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.293 [2024-05-13 02:48:45.903412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.293 [2024-05-13 02:48:45.903469] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.293 [2024-05-13 02:48:45.903483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:55.293 #12 NEW cov: 11922 ft: 12947 corp: 4/88b lim: 35 exec/s: 0 rss: 70Mb L: 30/30 MS: 1 ChangeBit- 00:07:55.293 [2024-05-13 02:48:45.953213] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.293 [2024-05-13 02:48:45.953239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.293 [2024-05-13 02:48:45.953300] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.293 [2024-05-13 02:48:45.953314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.293 [2024-05-13 02:48:45.953374] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.293 [2024-05-13 02:48:45.953395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.293 #13 NEW cov: 12007 ft: 13189 corp: 5/115b lim: 35 exec/s: 0 rss: 70Mb L: 27/30 MS: 1 CopyPart- 00:07:55.293 [2024-05-13 02:48:45.993046] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.293 [2024-05-13 02:48:45.993072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.293 #14 NEW cov: 12007 ft: 13991 corp: 6/128b lim: 35 exec/s: 0 rss: 70Mb L: 13/30 MS: 1 CrossOver- 00:07:55.293 [2024-05-13 02:48:46.043616] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.293 [2024-05-13 02:48:46.043642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.293 [2024-05-13 02:48:46.043718] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.293 [2024-05-13 02:48:46.043733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.293 [2024-05-13 02:48:46.043792] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.293 [2024-05-13 02:48:46.043806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.293 [2024-05-13 02:48:46.043866] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.293 [2024-05-13 02:48:46.043880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:55.293 #15 NEW cov: 12007 ft: 14061 corp: 7/158b lim: 35 exec/s: 0 rss: 70Mb L: 30/30 MS: 1 CopyPart- 00:07:55.293 [2024-05-13 02:48:46.083746] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.293 [2024-05-13 02:48:46.083772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.293 [2024-05-13 02:48:46.083831] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000061 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.293 [2024-05-13 02:48:46.083845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.293 [2024-05-13 02:48:46.083901] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.293 [2024-05-13 02:48:46.083915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.293 [2024-05-13 02:48:46.083971] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.293 [2024-05-13 02:48:46.083984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:55.552 #16 NEW cov: 12007 ft: 14145 corp: 8/191b lim: 35 exec/s: 0 rss: 70Mb L: 33/33 MS: 1 InsertRepeatedBytes- 00:07:55.552 [2024-05-13 02:48:46.133848] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:07:55.552 [2024-05-13 02:48:46.133873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.552 [2024-05-13 02:48:46.133949] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000061 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.552 [2024-05-13 02:48:46.133964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.552 [2024-05-13 02:48:46.134026] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.552 [2024-05-13 02:48:46.134040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.552 [2024-05-13 02:48:46.134097] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.552 [2024-05-13 02:48:46.134111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:55.552 #17 NEW cov: 12007 ft: 14231 corp: 9/224b lim: 35 exec/s: 0 rss: 70Mb L: 33/33 MS: 1 CrossOver- 00:07:55.552 [2024-05-13 02:48:46.184055] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.552 [2024-05-13 02:48:46.184082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.552 [2024-05-13 02:48:46.184160] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.552 [2024-05-13 02:48:46.184175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.552 [2024-05-13 02:48:46.184233] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.552 [2024-05-13 02:48:46.184246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.553 [2024-05-13 02:48:46.184306] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.553 [2024-05-13 02:48:46.184320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:55.553 #18 NEW cov: 12007 ft: 14320 corp: 10/254b lim: 35 exec/s: 0 rss: 70Mb L: 30/33 MS: 1 CopyPart- 00:07:55.553 [2024-05-13 02:48:46.223665] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.553 [2024-05-13 02:48:46.223691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.553 #19 NEW cov: 12007 ft: 14371 corp: 11/267b lim: 35 exec/s: 0 rss: 70Mb L: 13/33 MS: 1 ChangeByte- 00:07:55.553 [2024-05-13 02:48:46.273982] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.553 [2024-05-13 02:48:46.274009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.553 [2024-05-13 02:48:46.274087] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.553 [2024-05-13 02:48:46.274101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.553 #20 NEW cov: 12007 ft: 14560 corp: 12/284b lim: 35 exec/s: 0 rss: 70Mb L: 17/33 MS: 1 EraseBytes- 00:07:55.553 [2024-05-13 02:48:46.313908] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.553 [2024-05-13 02:48:46.313933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.553 #22 NEW cov: 12007 ft: 14566 corp: 13/292b lim: 35 exec/s: 0 rss: 70Mb L: 8/33 MS: 2 CrossOver-CopyPart- 00:07:55.553 [2024-05-13 02:48:46.354092] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.553 [2024-05-13 02:48:46.354117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.813 #23 NEW cov: 12007 ft: 14625 corp: 14/300b lim: 35 exec/s: 0 rss: 70Mb L: 8/33 MS: 1 ShuffleBytes- 00:07:55.813 [2024-05-13 02:48:46.404676] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.813 [2024-05-13 02:48:46.404701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.813 [2024-05-13 02:48:46.404764] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000061 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.813 [2024-05-13 02:48:46.404777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.813 [2024-05-13 02:48:46.404852] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.813 [2024-05-13 02:48:46.404869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.813 [2024-05-13 02:48:46.404930] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.813 [2024-05-13 02:48:46.404943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:55.813 NEW_FUNC[1/1]: 0x1a07120 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:55.813 #24 NEW cov: 12037 ft: 14659 corp: 15/333b lim: 35 exec/s: 0 rss: 70Mb L: 33/33 MS: 1 ChangeBinInt- 00:07:55.813 [2024-05-13 02:48:46.444823] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.813 [2024-05-13 02:48:46.444849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.813 [2024-05-13 02:48:46.444907] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000061 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.813 [2024-05-13 02:48:46.444920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.813 [2024-05-13 02:48:46.444979] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.813 [2024-05-13 02:48:46.444993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.813 [2024-05-13 02:48:46.445052] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.813 [2024-05-13 02:48:46.445066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:55.813 #25 NEW cov: 12037 ft: 14667 corp: 16/366b lim: 35 exec/s: 0 rss: 70Mb L: 33/33 MS: 1 ChangeByte- 00:07:55.813 [2024-05-13 02:48:46.484429] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.813 [2024-05-13 02:48:46.484454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.813 #26 NEW cov: 12037 ft: 14707 corp: 17/374b lim: 35 exec/s: 0 rss: 71Mb L: 8/33 MS: 1 EraseBytes- 00:07:55.813 [2024-05-13 02:48:46.535101] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.813 [2024-05-13 02:48:46.535127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.813 [2024-05-13 02:48:46.535188] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000061 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.813 [2024-05-13 02:48:46.535201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.813 [2024-05-13 02:48:46.535261] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.813 [2024-05-13 02:48:46.535278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.813 [2024-05-13 02:48:46.535337] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.813 [2024-05-13 02:48:46.535350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:55.813 #27 NEW cov: 12037 ft: 14712 corp: 18/404b lim: 35 exec/s: 27 rss: 71Mb L: 30/33 MS: 1 EraseBytes- 00:07:55.813 [2024-05-13 02:48:46.585183] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.813 [2024-05-13 02:48:46.585208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.813 [2024-05-13 02:48:46.585284] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.813 [2024-05-13 02:48:46.585299] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.813 [2024-05-13 02:48:46.585361] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.813 [2024-05-13 02:48:46.585374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.813 [2024-05-13 02:48:46.585440] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.813 [2024-05-13 02:48:46.585464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:55.813 #28 NEW cov: 12037 ft: 14733 corp: 19/438b lim: 35 exec/s: 28 rss: 71Mb L: 34/34 MS: 1 PersAutoDict- DE: "\001\000\000\000"- 00:07:56.073 [2024-05-13 02:48:46.625002] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.073 [2024-05-13 02:48:46.625027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.073 [2024-05-13 02:48:46.625102] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.073 [2024-05-13 02:48:46.625116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.073 #29 NEW cov: 12037 ft: 14820 corp: 20/456b lim: 35 exec/s: 29 rss: 71Mb L: 18/34 MS: 1 InsertByte- 00:07:56.073 [2024-05-13 02:48:46.665421] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.073 [2024-05-13 02:48:46.665448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.073 [2024-05-13 02:48:46.665523] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000061 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.073 [2024-05-13 02:48:46.665537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.073 [2024-05-13 02:48:46.665593] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.073 [2024-05-13 02:48:46.665608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.073 [2024-05-13 02:48:46.665662] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.073 [2024-05-13 02:48:46.665676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:56.073 #30 NEW cov: 12037 ft: 14839 corp: 21/489b lim: 35 exec/s: 30 rss: 71Mb L: 33/34 MS: 1 ChangeBinInt- 00:07:56.073 [2024-05-13 02:48:46.715753] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.073 [2024-05-13 02:48:46.715778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) 
qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.073 [2024-05-13 02:48:46.715838] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.073 [2024-05-13 02:48:46.715852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.073 [2024-05-13 02:48:46.715947] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.073 [2024-05-13 02:48:46.715960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:56.073 [2024-05-13 02:48:46.716016] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.073 [2024-05-13 02:48:46.716029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:56.073 NEW_FUNC[1/2]: 0x4d91f0 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:07:56.073 NEW_FUNC[2/2]: 0x11a4be0 in nvmf_ctrlr_set_features_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:1759 00:07:56.073 #31 NEW cov: 12070 ft: 15021 corp: 22/524b lim: 35 exec/s: 31 rss: 71Mb L: 35/35 MS: 1 CopyPart- 00:07:56.073 [2024-05-13 02:48:46.765245] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.073 [2024-05-13 02:48:46.765270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.073 #32 NEW cov: 12070 ft: 15036 corp: 23/532b lim: 35 exec/s: 32 rss: 71Mb L: 8/35 MS: 1 CrossOver- 00:07:56.073 [2024-05-13 02:48:46.805787] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.073 [2024-05-13 02:48:46.805813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.073 [2024-05-13 02:48:46.805874] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.073 [2024-05-13 02:48:46.805888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.073 [2024-05-13 02:48:46.805947] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.073 [2024-05-13 02:48:46.805962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.073 [2024-05-13 02:48:46.806023] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.073 [2024-05-13 02:48:46.806037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:56.073 #33 NEW cov: 12070 ft: 15059 corp: 24/562b lim: 35 exec/s: 33 rss: 71Mb L: 30/35 MS: 1 ChangeByte- 00:07:56.073 [2024-05-13 02:48:46.855954] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: 
SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.073 [2024-05-13 02:48:46.855980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.073 [2024-05-13 02:48:46.856056] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.073 [2024-05-13 02:48:46.856070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.073 [2024-05-13 02:48:46.856133] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.073 [2024-05-13 02:48:46.856147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.073 [2024-05-13 02:48:46.856205] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.073 [2024-05-13 02:48:46.856219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:56.331 #34 NEW cov: 12070 ft: 15069 corp: 25/596b lim: 35 exec/s: 34 rss: 71Mb L: 34/35 MS: 1 CopyPart- 00:07:56.331 [2024-05-13 02:48:46.896041] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.331 [2024-05-13 02:48:46.896067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.331 [2024-05-13 02:48:46.896130] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000061 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.331 [2024-05-13 02:48:46.896144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.332 [2024-05-13 02:48:46.896205] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.332 [2024-05-13 02:48:46.896220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.332 [2024-05-13 02:48:46.896281] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.332 [2024-05-13 02:48:46.896294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:56.332 #35 NEW cov: 12070 ft: 15078 corp: 26/629b lim: 35 exec/s: 35 rss: 71Mb L: 33/35 MS: 1 ShuffleBytes- 00:07:56.332 [2024-05-13 02:48:46.945735] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.332 [2024-05-13 02:48:46.945761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.332 #36 NEW cov: 12070 ft: 15094 corp: 27/638b lim: 35 exec/s: 36 rss: 72Mb L: 9/35 MS: 1 InsertByte- 00:07:56.332 [2024-05-13 02:48:46.995891] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.332 [2024-05-13 
02:48:46.995916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.332 #37 NEW cov: 12070 ft: 15104 corp: 28/651b lim: 35 exec/s: 37 rss: 72Mb L: 13/35 MS: 1 CrossOver- 00:07:56.332 [2024-05-13 02:48:47.046470] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.332 [2024-05-13 02:48:47.046495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.332 [2024-05-13 02:48:47.046576] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.332 [2024-05-13 02:48:47.046591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.332 [2024-05-13 02:48:47.046653] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.332 [2024-05-13 02:48:47.046666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.332 [2024-05-13 02:48:47.046727] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.332 [2024-05-13 02:48:47.046743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:56.332 #38 NEW cov: 12070 ft: 15116 corp: 29/681b lim: 35 exec/s: 38 rss: 72Mb L: 30/35 MS: 1 ChangeBit- 00:07:56.332 [2024-05-13 02:48:47.086559] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.332 [2024-05-13 02:48:47.086586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.332 [2024-05-13 02:48:47.086648] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.332 [2024-05-13 02:48:47.086661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.332 [2024-05-13 02:48:47.086721] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.332 [2024-05-13 02:48:47.086734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.332 [2024-05-13 02:48:47.086793] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.332 [2024-05-13 02:48:47.086806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:56.332 #39 NEW cov: 12070 ft: 15124 corp: 30/709b lim: 35 exec/s: 39 rss: 72Mb L: 28/35 MS: 1 InsertByte- 00:07:56.332 [2024-05-13 02:48:47.126572] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.332 [2024-05-13 02:48:47.126599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:07:56.332 [2024-05-13 02:48:47.126661] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.332 [2024-05-13 02:48:47.126675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.332 [2024-05-13 02:48:47.126735] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.332 [2024-05-13 02:48:47.126749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.590 #40 NEW cov: 12070 ft: 15136 corp: 31/736b lim: 35 exec/s: 40 rss: 72Mb L: 27/35 MS: 1 ChangeBit- 00:07:56.590 [2024-05-13 02:48:47.166829] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.590 [2024-05-13 02:48:47.166856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.590 [2024-05-13 02:48:47.166917] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.590 [2024-05-13 02:48:47.166930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.590 [2024-05-13 02:48:47.166990] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.590 [2024-05-13 02:48:47.167004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.590 [2024-05-13 02:48:47.167064] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.590 [2024-05-13 02:48:47.167077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:56.590 #41 NEW cov: 12070 ft: 15191 corp: 32/770b lim: 35 exec/s: 41 rss: 72Mb L: 34/35 MS: 1 PersAutoDict- DE: "\001\000\000\000"- 00:07:56.591 [2024-05-13 02:48:47.216660] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.591 [2024-05-13 02:48:47.216687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.591 [2024-05-13 02:48:47.216745] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.591 [2024-05-13 02:48:47.216759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.591 #42 NEW cov: 12070 ft: 15200 corp: 33/784b lim: 35 exec/s: 42 rss: 72Mb L: 14/35 MS: 1 CrossOver- 00:07:56.591 [2024-05-13 02:48:47.256959] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.591 [2024-05-13 02:48:47.256985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.591 [2024-05-13 02:48:47.257063] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.591 [2024-05-13 02:48:47.257077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.591 [2024-05-13 02:48:47.257139] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.591 [2024-05-13 02:48:47.257153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.591 #43 NEW cov: 12070 ft: 15225 corp: 34/811b lim: 35 exec/s: 43 rss: 72Mb L: 27/35 MS: 1 CopyPart- 00:07:56.591 [2024-05-13 02:48:47.296693] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.591 [2024-05-13 02:48:47.296719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.591 #44 NEW cov: 12070 ft: 15229 corp: 35/823b lim: 35 exec/s: 44 rss: 72Mb L: 12/35 MS: 1 PersAutoDict- DE: "\001\000\000\000"- 00:07:56.591 [2024-05-13 02:48:47.347535] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.591 [2024-05-13 02:48:47.347562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.591 [2024-05-13 02:48:47.347623] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.591 [2024-05-13 02:48:47.347636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.591 [2024-05-13 02:48:47.347695] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000f1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.591 [2024-05-13 02:48:47.347711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.591 [2024-05-13 02:48:47.347769] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000f1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.591 [2024-05-13 02:48:47.347784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:56.591 [2024-05-13 02:48:47.347840] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.591 [2024-05-13 02:48:47.347854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:56.591 #45 NEW cov: 12070 ft: 15234 corp: 36/858b lim: 35 exec/s: 45 rss: 72Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:07:56.591 [2024-05-13 02:48:47.387420] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.591 [2024-05-13 02:48:47.387449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.591 [2024-05-13 02:48:47.387511] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES 
RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.591 [2024-05-13 02:48:47.387526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.591 [2024-05-13 02:48:47.387586] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.591 [2024-05-13 02:48:47.387600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.591 [2024-05-13 02:48:47.387660] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.591 [2024-05-13 02:48:47.387673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:56.850 #46 NEW cov: 12070 ft: 15301 corp: 37/889b lim: 35 exec/s: 46 rss: 72Mb L: 31/35 MS: 1 CopyPart- 00:07:56.850 [2024-05-13 02:48:47.437154] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.850 [2024-05-13 02:48:47.437180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.850 #47 NEW cov: 12070 ft: 15353 corp: 38/902b lim: 35 exec/s: 47 rss: 72Mb L: 13/35 MS: 1 ChangeBit- 00:07:56.850 [2024-05-13 02:48:47.477718] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.850 [2024-05-13 02:48:47.477743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.850 [2024-05-13 02:48:47.477822] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000061 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.850 [2024-05-13 02:48:47.477836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.850 [2024-05-13 02:48:47.477942] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.850 [2024-05-13 02:48:47.477956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:56.850 NEW_FUNC[1/2]: 0x4d26c0 in feat_arbitration /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:273 00:07:56.850 NEW_FUNC[2/2]: 0x119d280 in nvmf_ctrlr_set_features_arbitration /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:1597 00:07:56.850 #48 NEW cov: 12127 ft: 15410 corp: 39/932b lim: 35 exec/s: 48 rss: 73Mb L: 30/35 MS: 1 ChangeBit- 00:07:56.850 [2024-05-13 02:48:47.528033] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.850 [2024-05-13 02:48:47.528059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.850 [2024-05-13 02:48:47.528135] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.850 [2024-05-13 02:48:47.528149] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.850 [2024-05-13 02:48:47.528210] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000f1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.850 [2024-05-13 02:48:47.528225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.850 [2024-05-13 02:48:47.528289] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000f1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.850 [2024-05-13 02:48:47.528305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:56.850 [2024-05-13 02:48:47.528363] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES HOST CONTROLLED THERMAL MANAGEMENT cid:8 cdw10:00000010 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.850 [2024-05-13 02:48:47.528378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:56.850 #49 NEW cov: 12127 ft: 15421 corp: 40/967b lim: 35 exec/s: 24 rss: 73Mb L: 35/35 MS: 1 ChangeBit- 00:07:56.850 #49 DONE cov: 12127 ft: 15421 corp: 40/967b lim: 35 exec/s: 24 rss: 73Mb 00:07:56.850 ###### Recommended dictionary. ###### 00:07:56.850 "\001\000\000\000" # Uses: 3 00:07:56.850 ###### End of recommended dictionary. ###### 00:07:56.850 Done 49 runs in 2 second(s) 00:07:56.850 [2024-05-13 02:48:47.556664] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:57.109 02:48:47 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_14.conf /var/tmp/suppress_nvmf_fuzz 00:07:57.109 02:48:47 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:57.109 02:48:47 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:57.109 02:48:47 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 15 1 0x1 00:07:57.109 02:48:47 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=15 00:07:57.109 02:48:47 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:57.109 02:48:47 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:57.109 02:48:47 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:07:57.110 02:48:47 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_15.conf 00:07:57.110 02:48:47 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:57.110 02:48:47 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:57.110 02:48:47 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 15 00:07:57.110 02:48:47 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4415 00:07:57.110 02:48:47 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:07:57.110 02:48:47 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' 00:07:57.110 02:48:47 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4415"/' 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:57.110 02:48:47 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:57.110 02:48:47 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:57.110 02:48:47 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' -c /tmp/fuzz_json_15.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 -Z 15 00:07:57.110 [2024-05-13 02:48:47.716880] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:07:57.110 [2024-05-13 02:48:47.716954] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3504243 ] 00:07:57.110 EAL: No free 2048 kB hugepages reported on node 1 00:07:57.368 [2024-05-13 02:48:47.929348] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:57.368 [2024-05-13 02:48:47.967029] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.368 [2024-05-13 02:48:47.997931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.368 [2024-05-13 02:48:48.050103] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:57.368 [2024-05-13 02:48:48.066069] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:57.368 [2024-05-13 02:48:48.066478] tcp.c: 966:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4415 *** 00:07:57.368 INFO: Running with entropic power schedule (0xFF, 100). 00:07:57.368 INFO: Seed: 1054475069 00:07:57.368 INFO: Loaded 1 modules (350429 inline 8-bit counters): 350429 [0x279074c, 0x27e6029), 00:07:57.368 INFO: Loaded 1 PC tables (350429 PCs): 350429 [0x27e6030,0x2d3ee00), 00:07:57.368 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:07:57.368 INFO: A corpus is not provided, starting from an empty corpus 00:07:57.368 #2 INITED exec/s: 0 rss: 63Mb 00:07:57.368 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:57.368 This may also happen if the target rejected all inputs we tried so far 00:07:57.627 NEW_FUNC[1/672]: 0x4b9270 in fuzz_admin_get_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:460 00:07:57.627 NEW_FUNC[2/672]: 0x4d91f0 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:07:57.885 #3 NEW cov: 11659 ft: 11648 corp: 2/10b lim: 35 exec/s: 0 rss: 70Mb L: 9/9 MS: 1 CMP- DE: "\377\377\377\377\377\377\377>"- 00:07:57.885 [2024-05-13 02:48:48.463321] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.885 [2024-05-13 02:48:48.463372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.885 NEW_FUNC[1/14]: 0x1722ad0 in spdk_nvme_print_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:263 00:07:57.885 NEW_FUNC[2/14]: 0x1722d10 in nvme_admin_qpair_print_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:202 00:07:57.885 #5 NEW cov: 11918 ft: 12533 corp: 3/19b lim: 35 exec/s: 0 rss: 70Mb L: 9/9 MS: 2 ChangeByte-PersAutoDict- DE: "\377\377\377\377\377\377\377>"- 00:07:57.885 [2024-05-13 02:48:48.503212] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.885 [2024-05-13 02:48:48.503241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.885 #6 NEW cov: 11924 ft: 12714 corp: 4/28b lim: 35 exec/s: 0 rss: 70Mb L: 9/9 MS: 1 ChangeBinInt- 00:07:57.885 [2024-05-13 02:48:48.543684] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.885 [2024-05-13 02:48:48.543712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.885 #7 NEW cov: 12009 ft: 13225 corp: 5/46b lim: 35 exec/s: 0 rss: 70Mb L: 18/18 MS: 1 CrossOver- 00:07:57.885 [2024-05-13 02:48:48.593573] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.885 [2024-05-13 02:48:48.593601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.885 #8 NEW cov: 12009 ft: 13357 corp: 6/55b lim: 35 exec/s: 0 rss: 70Mb L: 9/18 MS: 1 ChangeBit- 00:07:57.885 [2024-05-13 02:48:48.634004] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.885 [2024-05-13 02:48:48.634035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.885 [2024-05-13 02:48:48.634173] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.885 [2024-05-13 02:48:48.634194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.885 #14 NEW cov: 12009 ft: 13559 corp: 7/72b lim: 35 exec/s: 0 rss: 70Mb L: 17/18 MS: 1 PersAutoDict- DE: 
"\377\377\377\377\377\377\377>"- 00:07:57.885 [2024-05-13 02:48:48.673787] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000720 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:57.885 [2024-05-13 02:48:48.673816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.144 #15 NEW cov: 12009 ft: 13620 corp: 8/82b lim: 35 exec/s: 0 rss: 70Mb L: 10/18 MS: 1 InsertByte- 00:07:58.144 [2024-05-13 02:48:48.724144] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000720 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.144 [2024-05-13 02:48:48.724173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.144 [2024-05-13 02:48:48.724318] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.144 [2024-05-13 02:48:48.724337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:58.144 #16 NEW cov: 12009 ft: 13647 corp: 9/96b lim: 35 exec/s: 0 rss: 70Mb L: 14/18 MS: 1 CMP- DE: "\000\000\004\000"- 00:07:58.144 [2024-05-13 02:48:48.774222] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.144 [2024-05-13 02:48:48.774251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.144 [2024-05-13 02:48:48.774386] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000072f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.144 [2024-05-13 02:48:48.774403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:58.144 #17 NEW cov: 12009 ft: 13679 corp: 10/113b lim: 35 exec/s: 0 rss: 70Mb L: 17/18 MS: 1 CopyPart- 00:07:58.144 [2024-05-13 02:48:48.824234] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.144 [2024-05-13 02:48:48.824262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.144 #18 NEW cov: 12009 ft: 13711 corp: 11/122b lim: 35 exec/s: 0 rss: 70Mb L: 9/18 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377>"- 00:07:58.144 [2024-05-13 02:48:48.864619] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.144 [2024-05-13 02:48:48.864648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.144 [2024-05-13 02:48:48.864771] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.144 [2024-05-13 02:48:48.864792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:58.144 #24 NEW cov: 12009 ft: 13746 corp: 12/136b lim: 35 exec/s: 0 rss: 70Mb L: 14/18 MS: 1 CopyPart- 00:07:58.144 #30 NEW cov: 12009 ft: 13770 corp: 13/146b lim: 35 exec/s: 0 rss: 70Mb L: 10/18 MS: 1 CrossOver- 00:07:58.144 [2024-05-13 
02:48:48.945083] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.144 [2024-05-13 02:48:48.945112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.144 [2024-05-13 02:48:48.945256] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.144 [2024-05-13 02:48:48.945275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:58.144 [2024-05-13 02:48:48.945428] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.144 [2024-05-13 02:48:48.945449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:58.402 #31 NEW cov: 12009 ft: 14005 corp: 14/170b lim: 35 exec/s: 0 rss: 70Mb L: 24/24 MS: 1 InsertRepeatedBytes- 00:07:58.402 [2024-05-13 02:48:48.995467] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.402 [2024-05-13 02:48:48.995495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.402 [2024-05-13 02:48:48.995650] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007fa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.402 [2024-05-13 02:48:48.995671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:58.402 [2024-05-13 02:48:48.995813] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007fa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.402 [2024-05-13 02:48:48.995834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:58.402 [2024-05-13 02:48:48.995977] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.402 [2024-05-13 02:48:48.995996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:58.402 NEW_FUNC[1/1]: 0x1a07120 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:58.402 #32 NEW cov: 12026 ft: 14460 corp: 15/204b lim: 35 exec/s: 0 rss: 70Mb L: 34/34 MS: 1 InsertRepeatedBytes- 00:07:58.402 [2024-05-13 02:48:49.054923] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000720 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.402 [2024-05-13 02:48:49.054952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.402 #33 NEW cov: 12026 ft: 14531 corp: 16/214b lim: 35 exec/s: 0 rss: 70Mb L: 10/34 MS: 1 ChangeBit- 00:07:58.402 [2024-05-13 02:48:49.094642] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.402 [2024-05-13 02:48:49.094670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:07:58.402 #34 NEW cov: 12026 ft: 14598 corp: 17/225b lim: 35 exec/s: 34 rss: 70Mb L: 11/34 MS: 1 CrossOver- 00:07:58.402 [2024-05-13 02:48:49.135903] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000100 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.402 [2024-05-13 02:48:49.135931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:58.402 [2024-05-13 02:48:49.136058] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000132 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.402 [2024-05-13 02:48:49.136077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:58.402 [2024-05-13 02:48:49.136201] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000132 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.402 [2024-05-13 02:48:49.136220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:58.402 #35 NEW cov: 12026 ft: 14640 corp: 18/258b lim: 35 exec/s: 35 rss: 70Mb L: 33/34 MS: 1 InsertRepeatedBytes- 00:07:58.402 [2024-05-13 02:48:49.185589] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007fa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.402 [2024-05-13 02:48:49.185616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:58.661 #36 NEW cov: 12026 ft: 14651 corp: 19/274b lim: 35 exec/s: 36 rss: 70Mb L: 16/34 MS: 1 CrossOver- 00:07:58.661 [2024-05-13 02:48:49.235413] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000720 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.661 [2024-05-13 02:48:49.235442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.661 #37 NEW cov: 12026 ft: 14688 corp: 20/281b lim: 35 exec/s: 37 rss: 71Mb L: 7/34 MS: 1 EraseBytes- 00:07:58.661 [2024-05-13 02:48:49.285496] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.661 [2024-05-13 02:48:49.285524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.661 #38 NEW cov: 12026 ft: 14727 corp: 21/290b lim: 35 exec/s: 38 rss: 71Mb L: 9/34 MS: 1 EraseBytes- 00:07:58.661 [2024-05-13 02:48:49.325889] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.661 [2024-05-13 02:48:49.325915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.661 [2024-05-13 02:48:49.326045] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.661 [2024-05-13 02:48:49.326064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:58.661 #39 NEW cov: 12026 ft: 14766 corp: 22/304b lim: 35 exec/s: 39 rss: 71Mb L: 14/34 MS: 1 ShuffleBytes- 00:07:58.661 [2024-05-13 02:48:49.366072] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000720 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.661 [2024-05-13 02:48:49.366098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.661 [2024-05-13 02:48:49.366224] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.661 [2024-05-13 02:48:49.366241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:58.661 #40 NEW cov: 12026 ft: 14779 corp: 23/318b lim: 35 exec/s: 40 rss: 71Mb L: 14/34 MS: 1 ChangeByte- 00:07:58.661 [2024-05-13 02:48:49.415850] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000720 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.661 [2024-05-13 02:48:49.415877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.661 #41 NEW cov: 12026 ft: 14807 corp: 24/328b lim: 35 exec/s: 41 rss: 71Mb L: 10/34 MS: 1 ChangeByte- 00:07:58.661 [2024-05-13 02:48:49.455978] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000000c1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.661 [2024-05-13 02:48:49.456004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.920 #42 NEW cov: 12026 ft: 14815 corp: 25/337b lim: 35 exec/s: 42 rss: 71Mb L: 9/34 MS: 1 ShuffleBytes- 00:07:58.920 [2024-05-13 02:48:49.496329] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000720 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.920 [2024-05-13 02:48:49.496357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.920 [2024-05-13 02:48:49.496501] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.920 [2024-05-13 02:48:49.496522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:58.920 #43 NEW cov: 12026 ft: 14832 corp: 26/351b lim: 35 exec/s: 43 rss: 71Mb L: 14/34 MS: 1 ChangeBit- 00:07:58.920 [2024-05-13 02:48:49.536251] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.920 [2024-05-13 02:48:49.536279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.920 #44 NEW cov: 12026 ft: 14848 corp: 27/359b lim: 35 exec/s: 44 rss: 71Mb L: 8/34 MS: 1 EraseBytes- 00:07:58.920 [2024-05-13 02:48:49.577371] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000001ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.920 [2024-05-13 02:48:49.577401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.920 [2024-05-13 02:48:49.577533] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007fa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.920 [2024-05-13 02:48:49.577551] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:58.920 [2024-05-13 02:48:49.577690] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007fa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.920 [2024-05-13 02:48:49.577708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:58.920 [2024-05-13 02:48:49.577852] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.920 [2024-05-13 02:48:49.577871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:58.920 [2024-05-13 02:48:49.578015] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.920 [2024-05-13 02:48:49.578034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:58.920 #45 NEW cov: 12026 ft: 14911 corp: 28/394b lim: 35 exec/s: 45 rss: 71Mb L: 35/35 MS: 1 InsertByte- 00:07:58.920 [2024-05-13 02:48:49.637159] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.920 [2024-05-13 02:48:49.637186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:58.920 [2024-05-13 02:48:49.637321] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000001ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.920 [2024-05-13 02:48:49.637340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:58.920 #46 NEW cov: 12026 ft: 14919 corp: 29/420b lim: 35 exec/s: 46 rss: 71Mb L: 26/35 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377>"- 00:07:58.920 [2024-05-13 02:48:49.676435] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.920 [2024-05-13 02:48:49.676463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:58.920 #47 NEW cov: 12026 ft: 14920 corp: 30/429b lim: 35 exec/s: 47 rss: 71Mb L: 9/35 MS: 1 ChangeByte- 00:07:58.920 [2024-05-13 02:48:49.717359] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.920 [2024-05-13 02:48:49.717391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:58.920 [2024-05-13 02:48:49.717533] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000001ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.920 [2024-05-13 02:48:49.717554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:59.179 #48 NEW cov: 12026 ft: 14926 corp: 31/455b lim: 35 exec/s: 48 rss: 71Mb L: 26/35 MS: 1 ChangeBinInt- 00:07:59.179 [2024-05-13 02:48:49.767533] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.179 [2024-05-13 
02:48:49.767561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:59.179 [2024-05-13 02:48:49.767701] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000012f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.179 [2024-05-13 02:48:49.767719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:59.179 [2024-05-13 02:48:49.767853] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:0000012f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.179 [2024-05-13 02:48:49.767871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:59.179 #49 NEW cov: 12026 ft: 14949 corp: 32/487b lim: 35 exec/s: 49 rss: 71Mb L: 32/35 MS: 1 InsertRepeatedBytes- 00:07:59.179 [2024-05-13 02:48:49.807205] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000028 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.179 [2024-05-13 02:48:49.807233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.179 [2024-05-13 02:48:49.807357] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.179 [2024-05-13 02:48:49.807387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:59.179 #50 NEW cov: 12026 ft: 14954 corp: 33/501b lim: 35 exec/s: 50 rss: 71Mb L: 14/35 MS: 1 ChangeByte- 00:07:59.179 [2024-05-13 02:48:49.847798] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.179 [2024-05-13 02:48:49.847824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:59.179 [2024-05-13 02:48:49.847960] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000001ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.179 [2024-05-13 02:48:49.847977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:59.179 #51 NEW cov: 12026 ft: 14972 corp: 34/527b lim: 35 exec/s: 51 rss: 71Mb L: 26/35 MS: 1 ChangeByte- 00:07:59.179 [2024-05-13 02:48:49.887868] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.179 [2024-05-13 02:48:49.887897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.179 [2024-05-13 02:48:49.888027] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000003e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.179 [2024-05-13 02:48:49.888046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:59.179 [2024-05-13 02:48:49.888180] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.179 [2024-05-13 02:48:49.888198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD 
(00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:59.179 [2024-05-13 02:48:49.888332] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.179 [2024-05-13 02:48:49.888351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:59.179 #52 NEW cov: 12026 ft: 14973 corp: 35/561b lim: 35 exec/s: 52 rss: 72Mb L: 34/35 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377>"- 00:07:59.179 [2024-05-13 02:48:49.938181] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.179 [2024-05-13 02:48:49.938211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:59.179 [2024-05-13 02:48:49.938342] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000013e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.179 [2024-05-13 02:48:49.938359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:59.179 [2024-05-13 02:48:49.938510] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:0000012f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.179 [2024-05-13 02:48:49.938528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:59.179 #53 NEW cov: 12026 ft: 15023 corp: 36/594b lim: 35 exec/s: 53 rss: 72Mb L: 33/35 MS: 1 CopyPart- 00:07:59.438 [2024-05-13 02:48:49.987649] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.438 [2024-05-13 02:48:49.987676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.438 #54 NEW cov: 12033 ft: 15051 corp: 37/603b lim: 35 exec/s: 54 rss: 72Mb L: 9/35 MS: 1 ChangeBit- 00:07:59.438 [2024-05-13 02:48:50.027602] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000000b9 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.438 [2024-05-13 02:48:50.027629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.438 #58 NEW cov: 12033 ft: 15125 corp: 38/611b lim: 35 exec/s: 58 rss: 72Mb L: 8/35 MS: 4 InsertByte-ShuffleBytes-ShuffleBytes-InsertRepeatedBytes- 00:07:59.438 [2024-05-13 02:48:50.068434] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.438 [2024-05-13 02:48:50.068463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.438 [2024-05-13 02:48:50.068597] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.438 [2024-05-13 02:48:50.068614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:59.438 [2024-05-13 02:48:50.068752] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.438 
[2024-05-13 02:48:50.068770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:59.438 [2024-05-13 02:48:50.068909] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.438 [2024-05-13 02:48:50.068927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:59.438 #59 NEW cov: 12033 ft: 15137 corp: 39/645b lim: 35 exec/s: 59 rss: 72Mb L: 34/35 MS: 1 InsertRepeatedBytes- 00:07:59.438 [2024-05-13 02:48:50.118317] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.438 [2024-05-13 02:48:50.118344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:59.438 #60 NEW cov: 12033 ft: 15155 corp: 40/663b lim: 35 exec/s: 30 rss: 72Mb L: 18/35 MS: 1 ChangeBit- 00:07:59.438 #60 DONE cov: 12033 ft: 15155 corp: 40/663b lim: 35 exec/s: 30 rss: 72Mb 00:07:59.438 ###### Recommended dictionary. ###### 00:07:59.438 "\377\377\377\377\377\377\377>" # Uses: 5 00:07:59.438 "\000\000\004\000" # Uses: 0 00:07:59.438 ###### End of recommended dictionary. ###### 00:07:59.438 Done 60 runs in 2 second(s) 00:07:59.438 [2024-05-13 02:48:50.138944] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:59.697 02:48:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_15.conf /var/tmp/suppress_nvmf_fuzz 00:07:59.697 02:48:50 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:59.697 02:48:50 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:59.697 02:48:50 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 16 1 0x1 00:07:59.697 02:48:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=16 00:07:59.697 02:48:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:59.697 02:48:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:59.697 02:48:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:07:59.697 02:48:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_16.conf 00:07:59.697 02:48:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:59.697 02:48:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:59.697 02:48:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 16 00:07:59.697 02:48:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4416 00:07:59.697 02:48:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:07:59.697 02:48:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' 00:07:59.697 02:48:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4416"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:59.697 02:48:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo 
leak:spdk_nvmf_qpair_disconnect 00:07:59.697 02:48:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:59.697 02:48:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' -c /tmp/fuzz_json_16.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 -Z 16 00:07:59.697 [2024-05-13 02:48:50.303646] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:07:59.697 [2024-05-13 02:48:50.303709] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3504563 ] 00:07:59.697 EAL: No free 2048 kB hugepages reported on node 1 00:07:59.955 [2024-05-13 02:48:50.517435] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:59.955 [2024-05-13 02:48:50.555429] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.955 [2024-05-13 02:48:50.584632] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.955 [2024-05-13 02:48:50.636988] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:59.955 [2024-05-13 02:48:50.652947] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:59.955 [2024-05-13 02:48:50.653348] tcp.c: 966:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4416 *** 00:07:59.955 INFO: Running with entropic power schedule (0xFF, 100). 00:07:59.955 INFO: Seed: 3642482899 00:07:59.955 INFO: Loaded 1 modules (350429 inline 8-bit counters): 350429 [0x279074c, 0x27e6029), 00:07:59.955 INFO: Loaded 1 PC tables (350429 PCs): 350429 [0x27e6030,0x2d3ee00), 00:07:59.955 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:07:59.955 INFO: A corpus is not provided, starting from an empty corpus 00:07:59.955 #2 INITED exec/s: 0 rss: 63Mb 00:07:59.955 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:59.955 This may also happen if the target rejected all inputs we tried so far 00:07:59.955 [2024-05-13 02:48:50.730193] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069867569151 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.955 [2024-05-13 02:48:50.730235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.955 [2024-05-13 02:48:50.730298] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:59.955 [2024-05-13 02:48:50.730319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.489 NEW_FUNC[1/686]: 0x4ba720 in fuzz_nvm_read_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:519 00:08:00.489 NEW_FUNC[2/686]: 0x4e0360 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:00.489 #6 NEW cov: 11878 ft: 11879 corp: 2/45b lim: 105 exec/s: 0 rss: 70Mb L: 44/44 MS: 4 ChangeBit-CopyPart-CopyPart-InsertRepeatedBytes- 00:08:00.489 [2024-05-13 02:48:51.060403] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069867569151 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.489 [2024-05-13 02:48:51.060448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.489 #12 NEW cov: 12008 ft: 13015 corp: 3/75b lim: 105 exec/s: 0 rss: 70Mb L: 30/44 MS: 1 EraseBytes- 00:08:00.489 [2024-05-13 02:48:51.120512] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069867569151 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.489 [2024-05-13 02:48:51.120544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.489 #13 NEW cov: 12014 ft: 13358 corp: 4/105b lim: 105 exec/s: 0 rss: 70Mb L: 30/44 MS: 1 ChangeBit- 00:08:00.489 [2024-05-13 02:48:51.180588] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069867569151 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.489 [2024-05-13 02:48:51.180623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.489 #14 NEW cov: 12099 ft: 13620 corp: 5/129b lim: 105 exec/s: 0 rss: 70Mb L: 24/44 MS: 1 EraseBytes- 00:08:00.489 [2024-05-13 02:48:51.230829] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069867569151 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.489 [2024-05-13 02:48:51.230861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.489 #15 NEW cov: 12099 ft: 13722 corp: 6/155b lim: 105 exec/s: 0 rss: 70Mb L: 26/44 MS: 1 EraseBytes- 00:08:00.489 [2024-05-13 02:48:51.280920] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446702288425713663 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.489 [2024-05-13 02:48:51.280955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.747 #16 NEW cov: 12099 ft: 13772 corp: 7/181b lim: 105 exec/s: 0 rss: 70Mb L: 26/44 MS: 1 ChangeByte- 00:08:00.747 [2024-05-13 02:48:51.341291] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069867569151 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.747 [2024-05-13 02:48:51.341326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.748 #17 NEW cov: 12099 ft: 13828 corp: 8/207b lim: 105 exec/s: 0 rss: 70Mb L: 26/44 MS: 1 ChangeBinInt- 00:08:00.748 [2024-05-13 02:48:51.391282] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069867569151 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.748 [2024-05-13 02:48:51.391313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.748 #18 NEW cov: 12099 ft: 13862 corp: 9/234b lim: 105 exec/s: 0 rss: 70Mb L: 27/44 MS: 1 EraseBytes- 00:08:00.748 [2024-05-13 02:48:51.451628] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069867569151 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.748 [2024-05-13 02:48:51.451662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.748 #19 NEW cov: 12099 ft: 13908 corp: 10/265b lim: 105 exec/s: 0 rss: 70Mb L: 31/44 MS: 1 InsertByte- 00:08:00.748 [2024-05-13 02:48:51.501637] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069867569151 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.748 [2024-05-13 02:48:51.501668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.748 #20 NEW cov: 12099 ft: 13953 corp: 11/292b lim: 105 exec/s: 0 rss: 70Mb L: 27/44 MS: 1 InsertByte- 00:08:01.006 [2024-05-13 02:48:51.561850] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069867569151 len:65282 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.006 [2024-05-13 02:48:51.561885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.006 NEW_FUNC[1/1]: 0x1a07120 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:01.006 #21 NEW cov: 12122 ft: 13994 corp: 12/319b lim: 105 exec/s: 0 rss: 70Mb L: 27/44 MS: 1 ChangeBinInt- 00:08:01.006 [2024-05-13 02:48:51.622040] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069867568922 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.006 [2024-05-13 02:48:51.622070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.006 #22 NEW cov: 12122 ft: 14017 corp: 13/360b lim: 105 exec/s: 0 rss: 70Mb L: 41/44 MS: 1 CopyPart- 00:08:01.006 [2024-05-13 02:48:51.672377] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069867568922 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.006 [2024-05-13 02:48:51.672408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.006 #23 NEW cov: 12122 ft: 14027 corp: 14/401b lim: 105 exec/s: 23 rss: 70Mb L: 41/44 MS: 1 ChangeByte- 00:08:01.006 [2024-05-13 02:48:51.732306] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069867569151 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.006 [2024-05-13 02:48:51.732342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.006 #24 NEW cov: 12122 ft: 14045 corp: 15/428b lim: 105 exec/s: 24 rss: 70Mb L: 27/44 MS: 1 CrossOver- 00:08:01.006 [2024-05-13 02:48:51.792595] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069867569151 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.006 [2024-05-13 02:48:51.792631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.263 #25 NEW cov: 12122 ft: 14061 corp: 16/455b lim: 105 exec/s: 25 rss: 71Mb L: 27/44 MS: 1 ShuffleBytes- 00:08:01.263 [2024-05-13 02:48:51.852648] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069867569151 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.263 [2024-05-13 02:48:51.852686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.263 #26 NEW cov: 12122 ft: 14081 corp: 17/486b lim: 105 exec/s: 26 rss: 71Mb L: 31/44 MS: 1 ChangeBit- 00:08:01.263 [2024-05-13 02:48:51.912902] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069867569151 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.263 [2024-05-13 02:48:51.912932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.263 #27 NEW cov: 12122 ft: 14093 corp: 18/516b lim: 105 exec/s: 27 rss: 71Mb L: 30/44 MS: 1 CopyPart- 00:08:01.263 [2024-05-13 02:48:51.963084] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069867569151 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.263 [2024-05-13 02:48:51.963117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.263 #28 NEW cov: 12122 ft: 14122 corp: 19/540b lim: 105 exec/s: 28 rss: 71Mb L: 24/44 MS: 1 CrossOver- 00:08:01.263 [2024-05-13 02:48:52.013296] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:2738188569599254783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.263 [2024-05-13 02:48:52.013324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.263 #33 NEW cov: 12122 ft: 14134 corp: 20/565b lim: 105 exec/s: 33 rss: 71Mb L: 25/44 MS: 5 EraseBytes-ChangeByte-ChangeByte-CrossOver-InsertRepeatedBytes- 00:08:01.263 [2024-05-13 02:48:52.063490] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069867568922 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.263 [2024-05-13 02:48:52.063517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.521 #34 NEW cov: 12122 ft: 
14217 corp: 21/605b lim: 105 exec/s: 34 rss: 71Mb L: 40/44 MS: 1 EraseBytes- 00:08:01.521 [2024-05-13 02:48:52.113469] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069867569151 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.521 [2024-05-13 02:48:52.113502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.521 #35 NEW cov: 12122 ft: 14249 corp: 22/633b lim: 105 exec/s: 35 rss: 71Mb L: 28/44 MS: 1 InsertByte- 00:08:01.521 [2024-05-13 02:48:52.173733] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069867569151 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.521 [2024-05-13 02:48:52.173766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.521 #36 NEW cov: 12122 ft: 14256 corp: 23/657b lim: 105 exec/s: 36 rss: 71Mb L: 24/44 MS: 1 ChangeByte- 00:08:01.521 [2024-05-13 02:48:52.223826] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069867569151 len:65459 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.521 [2024-05-13 02:48:52.223853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.521 #37 NEW cov: 12122 ft: 14269 corp: 24/682b lim: 105 exec/s: 37 rss: 71Mb L: 25/44 MS: 1 InsertByte- 00:08:01.522 [2024-05-13 02:48:52.274014] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069867569151 len:65459 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.522 [2024-05-13 02:48:52.274042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.522 #38 NEW cov: 12122 ft: 14291 corp: 25/705b lim: 105 exec/s: 38 rss: 71Mb L: 23/44 MS: 1 EraseBytes- 00:08:01.780 [2024-05-13 02:48:52.334177] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069867569151 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.780 [2024-05-13 02:48:52.334209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.780 #39 NEW cov: 12122 ft: 14306 corp: 26/736b lim: 105 exec/s: 39 rss: 71Mb L: 31/44 MS: 1 CopyPart- 00:08:01.780 [2024-05-13 02:48:52.395042] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446743691910447103 len:42920 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.780 [2024-05-13 02:48:52.395076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.780 [2024-05-13 02:48:52.395179] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:12080808863958804391 len:42920 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.780 [2024-05-13 02:48:52.395203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:01.780 [2024-05-13 02:48:52.395330] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:12080808863958804391 len:42920 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.780 [2024-05-13 02:48:52.395355] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:01.780 [2024-05-13 02:48:52.395489] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:12080808863958804391 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.780 [2024-05-13 02:48:52.395513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:01.780 #40 NEW cov: 12122 ft: 14853 corp: 27/829b lim: 105 exec/s: 40 rss: 71Mb L: 93/93 MS: 1 InsertRepeatedBytes- 00:08:01.780 [2024-05-13 02:48:52.444513] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:2738188569599254783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.780 [2024-05-13 02:48:52.444547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.780 #41 NEW cov: 12122 ft: 14901 corp: 28/854b lim: 105 exec/s: 41 rss: 71Mb L: 25/93 MS: 1 ChangeBit- 00:08:01.780 [2024-05-13 02:48:52.504684] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069867569151 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.780 [2024-05-13 02:48:52.504710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.780 #45 NEW cov: 12122 ft: 14906 corp: 29/886b lim: 105 exec/s: 45 rss: 71Mb L: 32/93 MS: 4 EraseBytes-InsertByte-ChangeBinInt-CrossOver- 00:08:01.780 [2024-05-13 02:48:52.554851] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:01.780 [2024-05-13 02:48:52.554879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.780 #46 NEW cov: 12122 ft: 14921 corp: 30/913b lim: 105 exec/s: 46 rss: 72Mb L: 27/93 MS: 1 CrossOver- 00:08:02.039 [2024-05-13 02:48:52.615039] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069867568922 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.039 [2024-05-13 02:48:52.615070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.039 #47 NEW cov: 12122 ft: 14956 corp: 31/949b lim: 105 exec/s: 47 rss: 72Mb L: 36/93 MS: 1 EraseBytes- 00:08:02.039 [2024-05-13 02:48:52.665423] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069867568922 len:5824 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.039 [2024-05-13 02:48:52.665457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.039 [2024-05-13 02:48:52.665606] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.039 [2024-05-13 02:48:52.665634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.039 #48 NEW cov: 12122 ft: 14985 corp: 32/997b lim: 105 exec/s: 48 rss: 72Mb L: 48/93 MS: 1 CMP- DE: "\026\277c\010\2470\204\000"- 00:08:02.039 [2024-05-13 02:48:52.725452] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:0 nsid:0 lba:18437736870612828159 len:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:02.039 [2024-05-13 02:48:52.725480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.039 #52 NEW cov: 12122 ft: 15010 corp: 33/1018b lim: 105 exec/s: 26 rss: 72Mb L: 21/93 MS: 4 EraseBytes-ChangeByte-ChangeBit-CMP- DE: "\001\000\002\000"- 00:08:02.039 #52 DONE cov: 12122 ft: 15010 corp: 33/1018b lim: 105 exec/s: 26 rss: 72Mb 00:08:02.039 ###### Recommended dictionary. ###### 00:08:02.039 "\026\277c\010\2470\204\000" # Uses: 0 00:08:02.039 "\001\000\002\000" # Uses: 0 00:08:02.039 ###### End of recommended dictionary. ###### 00:08:02.039 Done 52 runs in 2 second(s) 00:08:02.039 [2024-05-13 02:48:52.746146] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:02.297 02:48:52 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_16.conf /var/tmp/suppress_nvmf_fuzz 00:08:02.297 02:48:52 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:02.297 02:48:52 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:02.297 02:48:52 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 17 1 0x1 00:08:02.297 02:48:52 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=17 00:08:02.297 02:48:52 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:02.297 02:48:52 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:02.297 02:48:52 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:08:02.297 02:48:52 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_17.conf 00:08:02.297 02:48:52 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:02.297 02:48:52 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:02.297 02:48:52 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 17 00:08:02.297 02:48:52 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4417 00:08:02.297 02:48:52 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:08:02.297 02:48:52 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' 00:08:02.297 02:48:52 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4417"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:02.297 02:48:52 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:02.297 02:48:52 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:02.297 02:48:52 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' -c /tmp/fuzz_json_17.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 -Z 17 00:08:02.297 [2024-05-13 02:48:52.907534] Starting SPDK v24.05-pre git sha1 dafdb289f / 
DPDK 24.07.0-rc0 initialization... 00:08:02.297 [2024-05-13 02:48:52.907598] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3505064 ] 00:08:02.297 EAL: No free 2048 kB hugepages reported on node 1 00:08:02.555 [2024-05-13 02:48:53.125126] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:02.555 [2024-05-13 02:48:53.163469] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.556 [2024-05-13 02:48:53.194429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.556 [2024-05-13 02:48:53.246567] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:02.556 [2024-05-13 02:48:53.262528] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:02.556 [2024-05-13 02:48:53.262925] tcp.c: 966:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4417 *** 00:08:02.556 INFO: Running with entropic power schedule (0xFF, 100). 00:08:02.556 INFO: Seed: 1955505418 00:08:02.556 INFO: Loaded 1 modules (350429 inline 8-bit counters): 350429 [0x279074c, 0x27e6029), 00:08:02.556 INFO: Loaded 1 PC tables (350429 PCs): 350429 [0x27e6030,0x2d3ee00), 00:08:02.556 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:08:02.556 INFO: A corpus is not provided, starting from an empty corpus 00:08:02.556 #2 INITED exec/s: 0 rss: 63Mb 00:08:02.556 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:02.556 This may also happen if the target rejected all inputs we tried so far 00:08:02.556 [2024-05-13 02:48:53.311744] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:14468034568541685960 len:51401 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.556 [2024-05-13 02:48:53.311776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.813 NEW_FUNC[1/687]: 0x4bdaa0 in fuzz_nvm_write_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:540 00:08:02.813 NEW_FUNC[2/687]: 0x4e0360 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:02.813 #10 NEW cov: 11899 ft: 11900 corp: 2/43b lim: 120 exec/s: 0 rss: 70Mb L: 42/42 MS: 3 CMP-CrossOver-InsertRepeatedBytes- DE: "\377\377\377~"- 00:08:03.070 [2024-05-13 02:48:53.622520] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:14468034568541685960 len:51401 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.070 [2024-05-13 02:48:53.622568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.070 #16 NEW cov: 12029 ft: 12479 corp: 3/85b lim: 120 exec/s: 0 rss: 70Mb L: 42/42 MS: 1 CrossOver- 00:08:03.070 [2024-05-13 02:48:53.672880] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:14468034568541685960 len:51401 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.070 [2024-05-13 02:48:53.672912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.070 [2024-05-13 02:48:53.672978] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:14468034567615334600 len:51401 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.070 [2024-05-13 02:48:53.672994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.070 [2024-05-13 02:48:53.673048] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:756053644923160831 len:51401 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.070 [2024-05-13 02:48:53.673064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:03.070 #17 NEW cov: 12035 ft: 13631 corp: 4/169b lim: 120 exec/s: 0 rss: 70Mb L: 84/84 MS: 1 CopyPart- 00:08:03.070 [2024-05-13 02:48:53.713105] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:14468034568541685960 len:51401 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.070 [2024-05-13 02:48:53.713134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.071 [2024-05-13 02:48:53.713181] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:14468034567615334600 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.071 [2024-05-13 02:48:53.713197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.071 [2024-05-13 02:48:53.713251] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 
len:51401 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.071 [2024-05-13 02:48:53.713268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:03.071 [2024-05-13 02:48:53.713323] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18377639814222104776 len:51401 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.071 [2024-05-13 02:48:53.713339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:03.071 #18 NEW cov: 12120 ft: 14323 corp: 5/278b lim: 120 exec/s: 0 rss: 70Mb L: 109/109 MS: 1 InsertRepeatedBytes- 00:08:03.071 [2024-05-13 02:48:53.763289] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.071 [2024-05-13 02:48:53.763316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.071 [2024-05-13 02:48:53.763368] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.071 [2024-05-13 02:48:53.763388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.071 [2024-05-13 02:48:53.763440] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.071 [2024-05-13 02:48:53.763455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:03.071 [2024-05-13 02:48:53.763509] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.071 [2024-05-13 02:48:53.763523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:03.071 #21 NEW cov: 12120 ft: 14380 corp: 6/374b lim: 120 exec/s: 0 rss: 70Mb L: 96/109 MS: 3 ShuffleBytes-CopyPart-InsertRepeatedBytes- 00:08:03.071 [2024-05-13 02:48:53.803231] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:14468034568541685960 len:51401 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.071 [2024-05-13 02:48:53.803258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.071 [2024-05-13 02:48:53.803313] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:11719107999768421026 len:41635 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.071 [2024-05-13 02:48:53.803328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.071 [2024-05-13 02:48:53.803388] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:11719107999768421026 len:41635 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.071 [2024-05-13 02:48:53.803405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:03.071 #22 NEW cov: 12120 ft: 14443 corp: 7/456b lim: 120 exec/s: 0 rss: 70Mb L: 82/109 MS: 1 InsertRepeatedBytes- 00:08:03.071 [2024-05-13 
02:48:53.853044] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:14468034567615334600 len:51401 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.071 [2024-05-13 02:48:53.853070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.328 #31 NEW cov: 12120 ft: 14638 corp: 8/489b lim: 120 exec/s: 0 rss: 70Mb L: 33/109 MS: 4 CopyPart-InsertByte-ChangeBinInt-CrossOver- 00:08:03.328 [2024-05-13 02:48:53.893210] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:14468034568541685960 len:51401 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.328 [2024-05-13 02:48:53.893239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.328 #32 NEW cov: 12120 ft: 14669 corp: 9/531b lim: 120 exec/s: 0 rss: 70Mb L: 42/109 MS: 1 ChangeByte- 00:08:03.329 [2024-05-13 02:48:53.933223] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:14468034567615334600 len:51401 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.329 [2024-05-13 02:48:53.933250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.329 #34 NEW cov: 12120 ft: 14745 corp: 10/560b lim: 120 exec/s: 0 rss: 71Mb L: 29/109 MS: 2 EraseBytes-CrossOver- 00:08:03.329 [2024-05-13 02:48:53.973832] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:14468034568541685960 len:51401 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.329 [2024-05-13 02:48:53.973859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.329 [2024-05-13 02:48:53.973922] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:14468034567615334600 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.329 [2024-05-13 02:48:53.973938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.329 [2024-05-13 02:48:53.973992] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:51401 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.329 [2024-05-13 02:48:53.974007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:03.329 [2024-05-13 02:48:53.974062] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18377639814222104776 len:51401 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.329 [2024-05-13 02:48:53.974078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:03.329 #35 NEW cov: 12120 ft: 14795 corp: 11/671b lim: 120 exec/s: 0 rss: 71Mb L: 111/111 MS: 1 CMP- DE: "\000\016"- 00:08:03.329 [2024-05-13 02:48:54.023770] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:14468034568541685960 len:51401 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.329 [2024-05-13 02:48:54.023798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.329 [2024-05-13 02:48:54.023839] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:14468034567615334600 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.329 [2024-05-13 02:48:54.023854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.329 [2024-05-13 02:48:54.023910] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:51401 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.329 [2024-05-13 02:48:54.023925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:03.329 #36 NEW cov: 12120 ft: 14814 corp: 12/751b lim: 120 exec/s: 0 rss: 71Mb L: 80/111 MS: 1 EraseBytes- 00:08:03.329 [2024-05-13 02:48:54.074090] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.329 [2024-05-13 02:48:54.074117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.329 [2024-05-13 02:48:54.074179] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:14468034564246732800 len:51401 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.329 [2024-05-13 02:48:54.074195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.329 [2024-05-13 02:48:54.074251] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:720575943747881160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.329 [2024-05-13 02:48:54.074266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:03.329 [2024-05-13 02:48:54.074323] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.329 [2024-05-13 02:48:54.074338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:03.329 #37 NEW cov: 12120 ft: 14825 corp: 13/847b lim: 120 exec/s: 0 rss: 71Mb L: 96/111 MS: 1 CrossOver- 00:08:03.329 [2024-05-13 02:48:54.123937] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.329 [2024-05-13 02:48:54.123964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.329 [2024-05-13 02:48:54.123997] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.329 [2024-05-13 02:48:54.124013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.587 #38 NEW cov: 12120 ft: 15128 corp: 14/908b lim: 120 exec/s: 0 rss: 71Mb L: 61/111 MS: 1 InsertRepeatedBytes- 00:08:03.587 [2024-05-13 02:48:54.164368] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:167772160 len:51 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.587 [2024-05-13 02:48:54.164400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:1 00:08:03.587 [2024-05-13 02:48:54.164462] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.587 [2024-05-13 02:48:54.164478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.587 [2024-05-13 02:48:54.164547] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.587 [2024-05-13 02:48:54.164562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:03.587 [2024-05-13 02:48:54.164616] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.587 [2024-05-13 02:48:54.164630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:03.587 #44 NEW cov: 12120 ft: 15139 corp: 15/1004b lim: 120 exec/s: 0 rss: 71Mb L: 96/111 MS: 1 ChangeByte- 00:08:03.587 [2024-05-13 02:48:54.204340] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.587 [2024-05-13 02:48:54.204367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.587 [2024-05-13 02:48:54.204412] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.587 [2024-05-13 02:48:54.204428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.587 [2024-05-13 02:48:54.204484] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.587 [2024-05-13 02:48:54.204499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:03.587 NEW_FUNC[1/1]: 0x1a07120 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:03.587 #45 NEW cov: 12143 ft: 15185 corp: 16/1088b lim: 120 exec/s: 0 rss: 71Mb L: 84/111 MS: 1 EraseBytes- 00:08:03.587 [2024-05-13 02:48:54.244629] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.587 [2024-05-13 02:48:54.244660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.587 [2024-05-13 02:48:54.244694] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.587 [2024-05-13 02:48:54.244708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.587 [2024-05-13 02:48:54.244761] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744069414649855 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.587 [2024-05-13 02:48:54.244776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 
dnr:1 00:08:03.587 [2024-05-13 02:48:54.244829] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.587 [2024-05-13 02:48:54.244844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:03.587 #46 NEW cov: 12143 ft: 15191 corp: 17/1203b lim: 120 exec/s: 0 rss: 71Mb L: 115/115 MS: 1 InsertRepeatedBytes- 00:08:03.587 [2024-05-13 02:48:54.284294] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:14468034568541685960 len:51401 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.587 [2024-05-13 02:48:54.284320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.587 #47 NEW cov: 12143 ft: 15231 corp: 18/1245b lim: 120 exec/s: 47 rss: 71Mb L: 42/115 MS: 1 ShuffleBytes- 00:08:03.587 [2024-05-13 02:48:54.324514] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.587 [2024-05-13 02:48:54.324542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.587 [2024-05-13 02:48:54.324574] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.587 [2024-05-13 02:48:54.324590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.587 #48 NEW cov: 12143 ft: 15247 corp: 19/1306b lim: 120 exec/s: 48 rss: 71Mb L: 61/115 MS: 1 ChangeBinInt- 00:08:03.587 [2024-05-13 02:48:54.374684] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.587 [2024-05-13 02:48:54.374711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.587 [2024-05-13 02:48:54.374763] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.587 [2024-05-13 02:48:54.374779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.848 #49 NEW cov: 12143 ft: 15256 corp: 20/1367b lim: 120 exec/s: 49 rss: 71Mb L: 61/115 MS: 1 ChangeByte- 00:08:03.848 [2024-05-13 02:48:54.415070] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:14468034568541685960 len:51401 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.848 [2024-05-13 02:48:54.415098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.848 [2024-05-13 02:48:54.415161] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:14468034567615334600 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.848 [2024-05-13 02:48:54.415177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.848 [2024-05-13 02:48:54.415229] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:51401 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:08:03.848 [2024-05-13 02:48:54.415250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:03.848 [2024-05-13 02:48:54.415303] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18377639814222104776 len:51401 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.848 [2024-05-13 02:48:54.415318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:03.848 #50 NEW cov: 12143 ft: 15324 corp: 21/1476b lim: 120 exec/s: 50 rss: 71Mb L: 109/115 MS: 1 ChangeBinInt- 00:08:03.848 [2024-05-13 02:48:54.455233] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:167784960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.848 [2024-05-13 02:48:54.455259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.848 [2024-05-13 02:48:54.455307] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.848 [2024-05-13 02:48:54.455322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.848 [2024-05-13 02:48:54.455375] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.848 [2024-05-13 02:48:54.455396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:03.848 [2024-05-13 02:48:54.455449] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.848 [2024-05-13 02:48:54.455465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:03.848 #51 NEW cov: 12143 ft: 15335 corp: 22/1573b lim: 120 exec/s: 51 rss: 72Mb L: 97/115 MS: 1 InsertByte- 00:08:03.848 [2024-05-13 02:48:54.505369] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.848 [2024-05-13 02:48:54.505399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.848 [2024-05-13 02:48:54.505450] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:14468034564246732800 len:51401 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.848 [2024-05-13 02:48:54.505465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.848 [2024-05-13 02:48:54.505518] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:720575943747881160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.848 [2024-05-13 02:48:54.505533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:03.848 [2024-05-13 02:48:54.505587] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.848 [2024-05-13 02:48:54.505602] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:03.848 #52 NEW cov: 12143 ft: 15372 corp: 23/1669b lim: 120 exec/s: 52 rss: 72Mb L: 96/115 MS: 1 ChangeBinInt- 00:08:03.848 [2024-05-13 02:48:54.555050] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:14468034567615334600 len:51401 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.848 [2024-05-13 02:48:54.555077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.848 #53 NEW cov: 12143 ft: 15390 corp: 24/1702b lim: 120 exec/s: 53 rss: 72Mb L: 33/115 MS: 1 ChangeBinInt- 00:08:03.848 [2024-05-13 02:48:54.595160] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:14468034567615334600 len:51401 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.848 [2024-05-13 02:48:54.595190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.848 #54 NEW cov: 12143 ft: 15401 corp: 25/1731b lim: 120 exec/s: 54 rss: 72Mb L: 29/115 MS: 1 CopyPart- 00:08:03.848 [2024-05-13 02:48:54.635747] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:14468034568541685960 len:51401 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.848 [2024-05-13 02:48:54.635774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.848 [2024-05-13 02:48:54.635816] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:14468034567615334600 len:51401 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.848 [2024-05-13 02:48:54.635831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.848 [2024-05-13 02:48:54.635879] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:756053644923160831 len:51401 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.848 [2024-05-13 02:48:54.635895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:03.848 [2024-05-13 02:48:54.635946] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:14468034567615334600 len:51401 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.848 [2024-05-13 02:48:54.635961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:04.166 #55 NEW cov: 12143 ft: 15410 corp: 26/1828b lim: 120 exec/s: 55 rss: 72Mb L: 97/115 MS: 1 CopyPart- 00:08:04.167 [2024-05-13 02:48:54.675920] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:14468034568541685960 len:51401 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.167 [2024-05-13 02:48:54.675949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.167 [2024-05-13 02:48:54.676008] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:14395694394794100935 len:51401 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.167 [2024-05-13 02:48:54.676028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 
sqhd:0003 p:0 m:0 dnr:1 00:08:04.167 [2024-05-13 02:48:54.676083] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:14468034567615334600 len:2687 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.167 [2024-05-13 02:48:54.676102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.167 [2024-05-13 02:48:54.676160] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:14468034567615334600 len:51401 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.167 [2024-05-13 02:48:54.676178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:04.167 #56 NEW cov: 12143 ft: 15462 corp: 27/1931b lim: 120 exec/s: 56 rss: 72Mb L: 103/115 MS: 1 InsertRepeatedBytes- 00:08:04.167 [2024-05-13 02:48:54.725996] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:14468034568541685960 len:51401 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.167 [2024-05-13 02:48:54.726024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.167 [2024-05-13 02:48:54.726069] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:14468034567615334600 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.167 [2024-05-13 02:48:54.726085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.167 [2024-05-13 02:48:54.726137] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:51401 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.167 [2024-05-13 02:48:54.726158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.167 [2024-05-13 02:48:54.726210] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:14468094224711076040 len:51401 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.167 [2024-05-13 02:48:54.726225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:04.167 #57 NEW cov: 12143 ft: 15516 corp: 28/2042b lim: 120 exec/s: 57 rss: 72Mb L: 111/115 MS: 1 PersAutoDict- DE: "\000\016"- 00:08:04.167 [2024-05-13 02:48:54.765771] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:167784960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.167 [2024-05-13 02:48:54.765798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.167 [2024-05-13 02:48:54.765848] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.167 [2024-05-13 02:48:54.765863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.167 #58 NEW cov: 12143 ft: 15517 corp: 29/2112b lim: 120 exec/s: 58 rss: 72Mb L: 70/115 MS: 1 EraseBytes- 00:08:04.167 [2024-05-13 02:48:54.816281] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:14483576400697149640 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:08:04.167 [2024-05-13 02:48:54.816310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.167 [2024-05-13 02:48:54.816359] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.167 [2024-05-13 02:48:54.816375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.167 [2024-05-13 02:48:54.816439] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.167 [2024-05-13 02:48:54.816455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.167 [2024-05-13 02:48:54.816508] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:51401 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.167 [2024-05-13 02:48:54.816522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:04.167 #59 NEW cov: 12143 ft: 15520 corp: 30/2222b lim: 120 exec/s: 59 rss: 72Mb L: 110/115 MS: 1 InsertRepeatedBytes- 00:08:04.167 [2024-05-13 02:48:54.866412] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:14468034568541685960 len:51401 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.167 [2024-05-13 02:48:54.866440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.167 [2024-05-13 02:48:54.866506] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:14468034567615334600 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.167 [2024-05-13 02:48:54.866522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.167 [2024-05-13 02:48:54.866576] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:51401 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.167 [2024-05-13 02:48:54.866592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.167 [2024-05-13 02:48:54.866647] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18377639814222104776 len:51401 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.167 [2024-05-13 02:48:54.866663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:04.167 #60 NEW cov: 12143 ft: 15526 corp: 31/2331b lim: 120 exec/s: 60 rss: 72Mb L: 109/115 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\000"- 00:08:04.167 [2024-05-13 02:48:54.906200] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:272674756493312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.167 [2024-05-13 02:48:54.906227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.167 [2024-05-13 02:48:54.906265] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.167 [2024-05-13 02:48:54.906281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.167 #61 NEW cov: 12143 ft: 15603 corp: 32/2392b lim: 120 exec/s: 61 rss: 73Mb L: 61/115 MS: 1 ChangeBinInt- 00:08:04.428 [2024-05-13 02:48:54.956628] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.428 [2024-05-13 02:48:54.956664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.428 [2024-05-13 02:48:54.956730] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:56515756661145600 len:51401 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.428 [2024-05-13 02:48:54.956751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.428 [2024-05-13 02:48:54.956814] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:14414333560721295560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.428 [2024-05-13 02:48:54.956836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.428 [2024-05-13 02:48:54.956903] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.428 [2024-05-13 02:48:54.956924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:04.428 #62 NEW cov: 12143 ft: 15646 corp: 33/2489b lim: 120 exec/s: 62 rss: 73Mb L: 97/115 MS: 1 InsertByte- 00:08:04.428 [2024-05-13 02:48:55.006809] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.428 [2024-05-13 02:48:55.006837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.428 [2024-05-13 02:48:55.006888] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:14468034564246732800 len:51401 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.428 [2024-05-13 02:48:55.006904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.428 [2024-05-13 02:48:55.006955] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:720575943747881160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.428 [2024-05-13 02:48:55.006970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.428 [2024-05-13 02:48:55.007024] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.428 [2024-05-13 02:48:55.007039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:04.428 #63 NEW cov: 12143 ft: 15655 corp: 34/2585b lim: 120 exec/s: 63 rss: 73Mb L: 96/115 MS: 1 CopyPart- 00:08:04.428 [2024-05-13 02:48:55.046607] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.428 [2024-05-13 02:48:55.046633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.428 [2024-05-13 02:48:55.046668] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.429 [2024-05-13 02:48:55.046683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.429 #64 NEW cov: 12143 ft: 15659 corp: 35/2644b lim: 120 exec/s: 64 rss: 73Mb L: 59/115 MS: 1 EraseBytes- 00:08:04.429 [2024-05-13 02:48:55.086553] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:14468034567615334600 len:51261 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.429 [2024-05-13 02:48:55.086582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.429 #65 NEW cov: 12143 ft: 15711 corp: 36/2677b lim: 120 exec/s: 65 rss: 73Mb L: 33/115 MS: 1 ChangeByte- 00:08:04.429 [2024-05-13 02:48:55.127121] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:167784960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.429 [2024-05-13 02:48:55.127148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.429 [2024-05-13 02:48:55.127214] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.429 [2024-05-13 02:48:55.127230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.429 [2024-05-13 02:48:55.127284] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:36283883716608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.429 [2024-05-13 02:48:55.127299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.429 [2024-05-13 02:48:55.127355] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.429 [2024-05-13 02:48:55.127370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:04.429 #66 NEW cov: 12143 ft: 15742 corp: 37/2774b lim: 120 exec/s: 66 rss: 73Mb L: 97/115 MS: 1 ChangeByte- 00:08:04.429 [2024-05-13 02:48:55.167157] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:14468034568541685960 len:51401 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.429 [2024-05-13 02:48:55.167185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.429 [2024-05-13 02:48:55.167219] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:14468034567615334600 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.429 [2024-05-13 02:48:55.167236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 
00:08:04.429 [2024-05-13 02:48:55.167288] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:51401 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.429 [2024-05-13 02:48:55.167303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.429 [2024-05-13 02:48:55.167358] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18377639814222104776 len:51401 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.429 [2024-05-13 02:48:55.167373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:04.429 #67 NEW cov: 12143 ft: 15755 corp: 38/2883b lim: 120 exec/s: 67 rss: 73Mb L: 109/115 MS: 1 ChangeByte- 00:08:04.429 [2024-05-13 02:48:55.216925] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:14468034568541685960 len:51401 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.429 [2024-05-13 02:48:55.216952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.688 #68 NEW cov: 12143 ft: 15758 corp: 39/2925b lim: 120 exec/s: 68 rss: 73Mb L: 42/115 MS: 1 CrossOver- 00:08:04.688 [2024-05-13 02:48:55.257423] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:167784960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.688 [2024-05-13 02:48:55.257451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.688 [2024-05-13 02:48:55.257497] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.688 [2024-05-13 02:48:55.257513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.688 [2024-05-13 02:48:55.257565] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.688 [2024-05-13 02:48:55.257581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.688 [2024-05-13 02:48:55.257636] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:256 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.688 [2024-05-13 02:48:55.257651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:04.688 #69 NEW cov: 12143 ft: 15767 corp: 40/3022b lim: 120 exec/s: 69 rss: 73Mb L: 97/115 MS: 1 PersAutoDict- DE: "\377\377\377~"- 00:08:04.688 [2024-05-13 02:48:55.297242] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:14468034567615334600 len:51401 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.688 [2024-05-13 02:48:55.297270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.688 [2024-05-13 02:48:55.297309] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:17152522134162508353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.688 [2024-05-13 02:48:55.297325] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.688 #70 NEW cov: 12143 ft: 15768 corp: 41/3073b lim: 120 exec/s: 35 rss: 73Mb L: 51/115 MS: 1 CrossOver- 00:08:04.688 #70 DONE cov: 12143 ft: 15768 corp: 41/3073b lim: 120 exec/s: 35 rss: 73Mb 00:08:04.688 ###### Recommended dictionary. ###### 00:08:04.688 "\377\377\377~" # Uses: 1 00:08:04.688 "\000\016" # Uses: 1 00:08:04.688 "\001\000\000\000\000\000\000\000" # Uses: 0 00:08:04.688 ###### End of recommended dictionary. ###### 00:08:04.688 Done 70 runs in 2 second(s) 00:08:04.688 [2024-05-13 02:48:55.327055] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:04.688 02:48:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_17.conf /var/tmp/suppress_nvmf_fuzz 00:08:04.688 02:48:55 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:04.688 02:48:55 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:04.688 02:48:55 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 18 1 0x1 00:08:04.688 02:48:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=18 00:08:04.688 02:48:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:04.688 02:48:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:04.688 02:48:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:08:04.688 02:48:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_18.conf 00:08:04.688 02:48:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:04.688 02:48:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:04.688 02:48:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 18 00:08:04.688 02:48:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4418 00:08:04.688 02:48:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:08:04.688 02:48:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' 00:08:04.688 02:48:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4418"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:04.688 02:48:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:04.688 02:48:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:04.688 02:48:55 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' -c /tmp/fuzz_json_18.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 -Z 18 00:08:04.688 [2024-05-13 02:48:55.487508] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 
00:08:04.688 [2024-05-13 02:48:55.487597] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3505605 ] 00:08:04.946 EAL: No free 2048 kB hugepages reported on node 1 00:08:04.946 [2024-05-13 02:48:55.699417] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:04.946 [2024-05-13 02:48:55.737993] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.204 [2024-05-13 02:48:55.769526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.204 [2024-05-13 02:48:55.821891] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:05.204 [2024-05-13 02:48:55.837855] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:05.204 [2024-05-13 02:48:55.838261] tcp.c: 966:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4418 *** 00:08:05.204 INFO: Running with entropic power schedule (0xFF, 100). 00:08:05.204 INFO: Seed: 237539996 00:08:05.204 INFO: Loaded 1 modules (350429 inline 8-bit counters): 350429 [0x279074c, 0x27e6029), 00:08:05.204 INFO: Loaded 1 PC tables (350429 PCs): 350429 [0x27e6030,0x2d3ee00), 00:08:05.204 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:08:05.204 INFO: A corpus is not provided, starting from an empty corpus 00:08:05.204 #2 INITED exec/s: 0 rss: 63Mb 00:08:05.204 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:05.204 This may also happen if the target rejected all inputs we tried so far 00:08:05.204 [2024-05-13 02:48:55.883427] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:05.204 [2024-05-13 02:48:55.883456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.205 [2024-05-13 02:48:55.883496] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:05.205 [2024-05-13 02:48:55.883511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.461 NEW_FUNC[1/685]: 0x4c1390 in fuzz_nvm_write_zeroes_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:562 00:08:05.461 NEW_FUNC[2/685]: 0x4e0360 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:05.461 #12 NEW cov: 11842 ft: 11843 corp: 2/54b lim: 100 exec/s: 0 rss: 70Mb L: 53/53 MS: 5 InsertByte-CopyPart-InsertByte-ChangeByte-InsertRepeatedBytes- 00:08:05.461 [2024-05-13 02:48:56.194246] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:05.461 [2024-05-13 02:48:56.194279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.461 [2024-05-13 02:48:56.194326] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:05.461 [2024-05-13 02:48:56.194339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.461 [2024-05-13 02:48:56.194404] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:05.462 [2024-05-13 02:48:56.194419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.462 #13 NEW cov: 11972 ft: 12724 corp: 3/124b lim: 100 exec/s: 0 rss: 70Mb L: 70/70 MS: 1 CopyPart- 00:08:05.462 [2024-05-13 02:48:56.244280] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:05.462 [2024-05-13 02:48:56.244308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.462 [2024-05-13 02:48:56.244358] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:05.462 [2024-05-13 02:48:56.244371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.462 [2024-05-13 02:48:56.244428] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:05.462 [2024-05-13 02:48:56.244442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.718 #14 NEW cov: 11978 ft: 12939 corp: 4/195b lim: 100 exec/s: 0 rss: 70Mb L: 71/71 MS: 1 InsertByte- 00:08:05.718 [2024-05-13 02:48:56.294287] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:05.718 [2024-05-13 02:48:56.294312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.718 [2024-05-13 02:48:56.294378] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:05.718 [2024-05-13 02:48:56.294402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.718 #15 NEW cov: 12063 ft: 13345 corp: 5/249b lim: 100 exec/s: 0 rss: 70Mb L: 54/71 MS: 1 InsertByte- 00:08:05.718 [2024-05-13 02:48:56.334555] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:05.718 [2024-05-13 02:48:56.334581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.718 [2024-05-13 02:48:56.334644] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:05.718 [2024-05-13 02:48:56.334659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.718 [2024-05-13 02:48:56.334710] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:05.718 [2024-05-13 02:48:56.334724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.718 #16 NEW cov: 12063 ft: 13390 corp: 6/319b lim: 100 exec/s: 0 rss: 70Mb L: 70/71 MS: 1 ChangeBit- 00:08:05.718 [2024-05-13 02:48:56.374650] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:05.718 [2024-05-13 02:48:56.374677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.718 [2024-05-13 02:48:56.374710] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:05.718 [2024-05-13 02:48:56.374728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.719 [2024-05-13 02:48:56.374781] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:05.719 [2024-05-13 02:48:56.374798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.719 #17 NEW cov: 12063 ft: 13450 corp: 7/389b lim: 100 exec/s: 0 rss: 70Mb L: 70/71 MS: 1 ShuffleBytes- 00:08:05.719 [2024-05-13 02:48:56.424699] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:05.719 [2024-05-13 02:48:56.424725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.719 [2024-05-13 02:48:56.424775] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:05.719 [2024-05-13 02:48:56.424789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.719 #18 NEW cov: 12063 ft: 13547 corp: 8/443b lim: 100 exec/s: 0 rss: 70Mb L: 54/71 MS: 1 ChangeByte- 00:08:05.719 [2024-05-13 02:48:56.474818] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:05.719 [2024-05-13 02:48:56.474845] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.719 [2024-05-13 02:48:56.474892] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:05.719 [2024-05-13 02:48:56.474907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.719 #19 NEW cov: 12063 ft: 13563 corp: 9/498b lim: 100 exec/s: 0 rss: 70Mb L: 55/71 MS: 1 InsertByte- 00:08:05.719 [2024-05-13 02:48:56.514882] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:05.719 [2024-05-13 02:48:56.514907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.719 [2024-05-13 02:48:56.514956] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:05.719 [2024-05-13 02:48:56.514970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.976 #20 NEW cov: 12063 ft: 13616 corp: 10/553b lim: 100 exec/s: 0 rss: 70Mb L: 55/71 MS: 1 InsertByte- 00:08:05.976 [2024-05-13 02:48:56.555165] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:05.976 [2024-05-13 02:48:56.555191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.976 [2024-05-13 02:48:56.555240] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:05.976 [2024-05-13 02:48:56.555255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.976 [2024-05-13 02:48:56.555308] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:05.976 [2024-05-13 02:48:56.555322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.976 #21 NEW cov: 12063 ft: 13665 corp: 11/624b lim: 100 exec/s: 0 rss: 70Mb L: 71/71 MS: 1 InsertByte- 00:08:05.976 [2024-05-13 02:48:56.595222] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:05.976 [2024-05-13 02:48:56.595248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.976 [2024-05-13 02:48:56.595293] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:05.976 [2024-05-13 02:48:56.595306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.976 [2024-05-13 02:48:56.595361] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:05.976 [2024-05-13 02:48:56.595377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.976 #22 NEW cov: 12063 ft: 13706 corp: 12/696b lim: 100 exec/s: 0 rss: 70Mb L: 72/72 MS: 1 CopyPart- 00:08:05.976 [2024-05-13 02:48:56.635341] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:05.976 [2024-05-13 
02:48:56.635366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.976 [2024-05-13 02:48:56.635435] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:05.976 [2024-05-13 02:48:56.635450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.976 [2024-05-13 02:48:56.635503] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:05.976 [2024-05-13 02:48:56.635518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.976 #28 NEW cov: 12063 ft: 13779 corp: 13/767b lim: 100 exec/s: 0 rss: 70Mb L: 71/72 MS: 1 CMP- DE: "\000\000"- 00:08:05.976 [2024-05-13 02:48:56.675561] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:05.976 [2024-05-13 02:48:56.675586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.976 [2024-05-13 02:48:56.675633] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:05.976 [2024-05-13 02:48:56.675646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.976 [2024-05-13 02:48:56.675697] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:05.976 [2024-05-13 02:48:56.675713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.976 #29 NEW cov: 12063 ft: 13805 corp: 14/838b lim: 100 exec/s: 0 rss: 70Mb L: 71/72 MS: 1 ShuffleBytes- 00:08:05.976 [2024-05-13 02:48:56.715644] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:05.976 [2024-05-13 02:48:56.715670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.976 [2024-05-13 02:48:56.715721] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:05.976 [2024-05-13 02:48:56.715735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.976 [2024-05-13 02:48:56.715788] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:05.977 [2024-05-13 02:48:56.715802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.977 #30 NEW cov: 12063 ft: 13838 corp: 15/910b lim: 100 exec/s: 0 rss: 70Mb L: 72/72 MS: 1 PersAutoDict- DE: "\000\000"- 00:08:05.977 [2024-05-13 02:48:56.755630] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:05.977 [2024-05-13 02:48:56.755655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.977 [2024-05-13 02:48:56.755690] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:05.977 [2024-05-13 02:48:56.755704] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.977 NEW_FUNC[1/1]: 0x1a07120 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:05.977 #31 NEW cov: 12086 ft: 13897 corp: 16/965b lim: 100 exec/s: 0 rss: 71Mb L: 55/72 MS: 1 PersAutoDict- DE: "\000\000"- 00:08:06.235 [2024-05-13 02:48:56.795866] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:06.235 [2024-05-13 02:48:56.795892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.235 [2024-05-13 02:48:56.795942] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:06.235 [2024-05-13 02:48:56.795957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.235 [2024-05-13 02:48:56.796008] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:06.235 [2024-05-13 02:48:56.796022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:06.235 #32 NEW cov: 12086 ft: 13918 corp: 17/1035b lim: 100 exec/s: 0 rss: 71Mb L: 70/72 MS: 1 CopyPart- 00:08:06.235 [2024-05-13 02:48:56.835829] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:06.235 [2024-05-13 02:48:56.835854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.235 [2024-05-13 02:48:56.835888] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:06.235 [2024-05-13 02:48:56.835901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.235 #33 NEW cov: 12086 ft: 13964 corp: 18/1083b lim: 100 exec/s: 0 rss: 71Mb L: 48/72 MS: 1 EraseBytes- 00:08:06.235 [2024-05-13 02:48:56.876049] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:06.235 [2024-05-13 02:48:56.876075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.235 [2024-05-13 02:48:56.876123] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:06.235 [2024-05-13 02:48:56.876136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.235 [2024-05-13 02:48:56.876189] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:06.235 [2024-05-13 02:48:56.876203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:06.235 #34 NEW cov: 12086 ft: 13976 corp: 19/1154b lim: 100 exec/s: 34 rss: 71Mb L: 71/72 MS: 1 CMP- DE: "\377\377\377\372"- 00:08:06.235 [2024-05-13 02:48:56.916213] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:06.235 [2024-05-13 02:48:56.916238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:1 00:08:06.235 [2024-05-13 02:48:56.916286] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:06.235 [2024-05-13 02:48:56.916301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.235 [2024-05-13 02:48:56.916351] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:06.235 [2024-05-13 02:48:56.916367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:06.235 #35 NEW cov: 12086 ft: 14008 corp: 20/1225b lim: 100 exec/s: 35 rss: 71Mb L: 71/72 MS: 1 ShuffleBytes- 00:08:06.235 [2024-05-13 02:48:56.956294] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:06.235 [2024-05-13 02:48:56.956319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.235 [2024-05-13 02:48:56.956388] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:06.235 [2024-05-13 02:48:56.956407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.235 [2024-05-13 02:48:56.956464] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:06.235 [2024-05-13 02:48:56.956479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:06.235 #36 NEW cov: 12086 ft: 14033 corp: 21/1296b lim: 100 exec/s: 36 rss: 71Mb L: 71/72 MS: 1 ChangeBinInt- 00:08:06.235 [2024-05-13 02:48:56.996326] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:06.235 [2024-05-13 02:48:56.996352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.235 [2024-05-13 02:48:56.996400] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:06.235 [2024-05-13 02:48:56.996431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.235 #37 NEW cov: 12086 ft: 14045 corp: 22/1349b lim: 100 exec/s: 37 rss: 71Mb L: 53/72 MS: 1 EraseBytes- 00:08:06.235 [2024-05-13 02:48:57.036650] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:06.235 [2024-05-13 02:48:57.036676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.235 [2024-05-13 02:48:57.036721] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:06.235 [2024-05-13 02:48:57.036735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.235 [2024-05-13 02:48:57.036790] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:06.235 [2024-05-13 02:48:57.036810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:06.235 [2024-05-13 02:48:57.036863] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:06.235 [2024-05-13 02:48:57.036876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:06.493 #38 NEW cov: 12086 ft: 14316 corp: 23/1436b lim: 100 exec/s: 38 rss: 71Mb L: 87/87 MS: 1 CopyPart- 00:08:06.493 [2024-05-13 02:48:57.086567] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:06.493 [2024-05-13 02:48:57.086592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.494 [2024-05-13 02:48:57.086642] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:06.494 [2024-05-13 02:48:57.086657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.494 #39 NEW cov: 12086 ft: 14349 corp: 24/1492b lim: 100 exec/s: 39 rss: 72Mb L: 56/87 MS: 1 CrossOver- 00:08:06.494 [2024-05-13 02:48:57.126783] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:06.494 [2024-05-13 02:48:57.126809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.494 [2024-05-13 02:48:57.126856] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:06.494 [2024-05-13 02:48:57.126869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.494 [2024-05-13 02:48:57.126922] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:06.494 [2024-05-13 02:48:57.126936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:06.494 #40 NEW cov: 12086 ft: 14366 corp: 25/1562b lim: 100 exec/s: 40 rss: 72Mb L: 70/87 MS: 1 CopyPart- 00:08:06.494 [2024-05-13 02:48:57.166736] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:06.494 [2024-05-13 02:48:57.166762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.494 [2024-05-13 02:48:57.166815] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:06.494 [2024-05-13 02:48:57.166830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.494 #41 NEW cov: 12086 ft: 14388 corp: 26/1618b lim: 100 exec/s: 41 rss: 72Mb L: 56/87 MS: 1 InsertByte- 00:08:06.494 [2024-05-13 02:48:57.206890] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:06.494 [2024-05-13 02:48:57.206915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.494 [2024-05-13 02:48:57.206958] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:06.494 [2024-05-13 02:48:57.206973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.494 #42 
NEW cov: 12086 ft: 14399 corp: 27/1667b lim: 100 exec/s: 42 rss: 72Mb L: 49/87 MS: 1 InsertByte- 00:08:06.494 [2024-05-13 02:48:57.247132] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:06.494 [2024-05-13 02:48:57.247157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.494 [2024-05-13 02:48:57.247191] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:06.494 [2024-05-13 02:48:57.247205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.494 [2024-05-13 02:48:57.247258] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:06.494 [2024-05-13 02:48:57.247273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:06.494 #43 NEW cov: 12086 ft: 14435 corp: 28/1737b lim: 100 exec/s: 43 rss: 72Mb L: 70/87 MS: 1 CopyPart- 00:08:06.494 [2024-05-13 02:48:57.297279] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:06.494 [2024-05-13 02:48:57.297307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.494 [2024-05-13 02:48:57.297354] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:06.494 [2024-05-13 02:48:57.297370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.494 [2024-05-13 02:48:57.297433] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:06.752 [2024-05-13 02:48:57.297449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:06.753 #44 NEW cov: 12086 ft: 14447 corp: 29/1808b lim: 100 exec/s: 44 rss: 72Mb L: 71/87 MS: 1 CopyPart- 00:08:06.753 [2024-05-13 02:48:57.347271] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:06.753 [2024-05-13 02:48:57.347295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.753 [2024-05-13 02:48:57.347343] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:06.753 [2024-05-13 02:48:57.347358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.753 #45 NEW cov: 12086 ft: 14454 corp: 30/1852b lim: 100 exec/s: 45 rss: 72Mb L: 44/87 MS: 1 EraseBytes- 00:08:06.753 [2024-05-13 02:48:57.387655] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:06.753 [2024-05-13 02:48:57.387683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.753 [2024-05-13 02:48:57.387745] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:06.753 [2024-05-13 02:48:57.387760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 
sqhd:0003 p:0 m:0 dnr:1 00:08:06.753 [2024-05-13 02:48:57.387810] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:06.753 [2024-05-13 02:48:57.387824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:06.753 [2024-05-13 02:48:57.387874] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:06.753 [2024-05-13 02:48:57.387888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:06.753 #49 NEW cov: 12086 ft: 14485 corp: 31/1945b lim: 100 exec/s: 49 rss: 72Mb L: 93/93 MS: 4 CopyPart-CopyPart-ChangeByte-InsertRepeatedBytes- 00:08:06.753 [2024-05-13 02:48:57.427627] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:06.753 [2024-05-13 02:48:57.427652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.753 [2024-05-13 02:48:57.427703] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:06.753 [2024-05-13 02:48:57.427719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.753 [2024-05-13 02:48:57.427771] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:06.753 [2024-05-13 02:48:57.427786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:06.753 #50 NEW cov: 12086 ft: 14494 corp: 32/2015b lim: 100 exec/s: 50 rss: 72Mb L: 70/93 MS: 1 ShuffleBytes- 00:08:06.753 [2024-05-13 02:48:57.477650] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:06.753 [2024-05-13 02:48:57.477675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.753 [2024-05-13 02:48:57.477710] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:06.753 [2024-05-13 02:48:57.477725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.753 #51 NEW cov: 12086 ft: 14497 corp: 33/2070b lim: 100 exec/s: 51 rss: 72Mb L: 55/93 MS: 1 ChangeBit- 00:08:06.753 [2024-05-13 02:48:57.517898] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:06.753 [2024-05-13 02:48:57.517924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.753 [2024-05-13 02:48:57.517975] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:06.753 [2024-05-13 02:48:57.517990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.753 [2024-05-13 02:48:57.518041] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:06.753 [2024-05-13 02:48:57.518056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 
00:08:06.753 #52 NEW cov: 12086 ft: 14509 corp: 34/2149b lim: 100 exec/s: 52 rss: 72Mb L: 79/93 MS: 1 CMP- DE: "\205\251\020\332\2510\204\000"- 00:08:07.012 [2024-05-13 02:48:57.558022] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:07.012 [2024-05-13 02:48:57.558049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.012 [2024-05-13 02:48:57.558107] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:07.012 [2024-05-13 02:48:57.558123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.012 [2024-05-13 02:48:57.558175] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:07.012 [2024-05-13 02:48:57.558192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:07.012 #53 NEW cov: 12086 ft: 14521 corp: 35/2221b lim: 100 exec/s: 53 rss: 72Mb L: 72/93 MS: 1 CopyPart- 00:08:07.012 [2024-05-13 02:48:57.598109] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:07.012 [2024-05-13 02:48:57.598135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.012 [2024-05-13 02:48:57.598181] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:07.012 [2024-05-13 02:48:57.598196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.012 [2024-05-13 02:48:57.598248] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:07.012 [2024-05-13 02:48:57.598261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:07.012 #54 NEW cov: 12086 ft: 14528 corp: 36/2292b lim: 100 exec/s: 54 rss: 72Mb L: 71/93 MS: 1 ShuffleBytes- 00:08:07.012 [2024-05-13 02:48:57.638223] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:07.012 [2024-05-13 02:48:57.638249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.012 [2024-05-13 02:48:57.638290] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:07.012 [2024-05-13 02:48:57.638304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.012 [2024-05-13 02:48:57.638354] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:07.012 [2024-05-13 02:48:57.638368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:07.012 #55 NEW cov: 12086 ft: 14540 corp: 37/2363b lim: 100 exec/s: 55 rss: 72Mb L: 71/93 MS: 1 ChangeBit- 00:08:07.012 [2024-05-13 02:48:57.678222] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:07.012 [2024-05-13 02:48:57.678247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.012 [2024-05-13 02:48:57.678295] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:07.012 [2024-05-13 02:48:57.678310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.012 #56 NEW cov: 12086 ft: 14551 corp: 38/2418b lim: 100 exec/s: 56 rss: 72Mb L: 55/93 MS: 1 ChangeBit- 00:08:07.012 [2024-05-13 02:48:57.718497] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:07.012 [2024-05-13 02:48:57.718523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.012 [2024-05-13 02:48:57.718570] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:07.012 [2024-05-13 02:48:57.718582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.012 [2024-05-13 02:48:57.718634] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:07.012 [2024-05-13 02:48:57.718652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:07.012 #57 NEW cov: 12086 ft: 14553 corp: 39/2493b lim: 100 exec/s: 57 rss: 72Mb L: 75/93 MS: 1 CMP- DE: "\000\000\000\000"- 00:08:07.012 [2024-05-13 02:48:57.758566] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:07.012 [2024-05-13 02:48:57.758591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.012 [2024-05-13 02:48:57.758639] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:07.012 [2024-05-13 02:48:57.758652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.012 [2024-05-13 02:48:57.758706] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:07.012 [2024-05-13 02:48:57.758720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:07.012 #58 NEW cov: 12086 ft: 14669 corp: 40/2564b lim: 100 exec/s: 58 rss: 72Mb L: 71/93 MS: 1 InsertByte- 00:08:07.012 [2024-05-13 02:48:57.798698] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:07.012 [2024-05-13 02:48:57.798724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.012 [2024-05-13 02:48:57.798771] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:07.012 [2024-05-13 02:48:57.798784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.012 [2024-05-13 02:48:57.798838] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:07.012 [2024-05-13 02:48:57.798852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 
p:0 m:0 dnr:1 00:08:07.271 #59 NEW cov: 12086 ft: 14695 corp: 41/2634b lim: 100 exec/s: 59 rss: 73Mb L: 70/93 MS: 1 ShuffleBytes- 00:08:07.271 [2024-05-13 02:48:57.848706] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:07.271 [2024-05-13 02:48:57.848731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.271 [2024-05-13 02:48:57.848762] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:07.271 [2024-05-13 02:48:57.848776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.271 #60 NEW cov: 12086 ft: 14699 corp: 42/2687b lim: 100 exec/s: 60 rss: 73Mb L: 53/93 MS: 1 EraseBytes- 00:08:07.271 [2024-05-13 02:48:57.888964] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:07.271 [2024-05-13 02:48:57.888989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.271 [2024-05-13 02:48:57.889053] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:07.271 [2024-05-13 02:48:57.889066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.271 [2024-05-13 02:48:57.889121] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:07.271 [2024-05-13 02:48:57.889136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:07.271 #61 NEW cov: 12086 ft: 14707 corp: 43/2758b lim: 100 exec/s: 30 rss: 73Mb L: 71/93 MS: 1 ChangeBit- 00:08:07.271 #61 DONE cov: 12086 ft: 14707 corp: 43/2758b lim: 100 exec/s: 30 rss: 73Mb 00:08:07.271 ###### Recommended dictionary. ###### 00:08:07.271 "\000\000" # Uses: 2 00:08:07.271 "\377\377\377\372" # Uses: 0 00:08:07.271 "\205\251\020\332\2510\204\000" # Uses: 0 00:08:07.271 "\000\000\000\000" # Uses: 0 00:08:07.271 ###### End of recommended dictionary. 
###### 00:08:07.271 Done 61 runs in 2 second(s) 00:08:07.271 [2024-05-13 02:48:57.908973] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:07.271 02:48:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_18.conf /var/tmp/suppress_nvmf_fuzz 00:08:07.271 02:48:58 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:07.271 02:48:58 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:07.271 02:48:58 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 19 1 0x1 00:08:07.271 02:48:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=19 00:08:07.271 02:48:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:07.271 02:48:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:07.271 02:48:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:08:07.271 02:48:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_19.conf 00:08:07.271 02:48:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:07.271 02:48:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:07.271 02:48:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 19 00:08:07.271 02:48:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4419 00:08:07.271 02:48:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:08:07.271 02:48:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' 00:08:07.271 02:48:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4419"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:07.271 02:48:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:07.271 02:48:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:07.271 02:48:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' -c /tmp/fuzz_json_19.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 -Z 19 00:08:07.271 [2024-05-13 02:48:58.069897] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:08:07.271 [2024-05-13 02:48:58.069978] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3505918 ] 00:08:07.529 EAL: No free 2048 kB hugepages reported on node 1 00:08:07.529 [2024-05-13 02:48:58.288890] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:07.529 [2024-05-13 02:48:58.327402] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.788 [2024-05-13 02:48:58.356674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.788 [2024-05-13 02:48:58.409125] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:07.788 [2024-05-13 02:48:58.425086] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:07.788 [2024-05-13 02:48:58.425496] tcp.c: 966:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 *** 00:08:07.788 INFO: Running with entropic power schedule (0xFF, 100). 00:08:07.788 INFO: Seed: 2824561178 00:08:07.788 INFO: Loaded 1 modules (350429 inline 8-bit counters): 350429 [0x279074c, 0x27e6029), 00:08:07.788 INFO: Loaded 1 PC tables (350429 PCs): 350429 [0x27e6030,0x2d3ee00), 00:08:07.788 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:08:07.788 INFO: A corpus is not provided, starting from an empty corpus 00:08:07.788 #2 INITED exec/s: 0 rss: 63Mb 00:08:07.788 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:07.788 This may also happen if the target rejected all inputs we tried so far 00:08:07.788 [2024-05-13 02:48:58.470575] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:08:07.788 [2024-05-13 02:48:58.470606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.788 [2024-05-13 02:48:58.470654] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:08:07.788 [2024-05-13 02:48:58.470670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.047 NEW_FUNC[1/684]: 0x4c4350 in fuzz_nvm_write_uncorrectable_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:582 00:08:08.047 NEW_FUNC[2/684]: 0x4e0360 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:08.047 #3 NEW cov: 11811 ft: 11813 corp: 2/30b lim: 50 exec/s: 0 rss: 70Mb L: 29/29 MS: 1 InsertRepeatedBytes- 00:08:08.047 [2024-05-13 02:48:58.781478] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:08:08.047 [2024-05-13 02:48:58.781514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.047 [2024-05-13 02:48:58.781581] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:08:08.048 [2024-05-13 02:48:58.781598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.048 NEW_FUNC[1/1]: 0x1a3cde0 in sock_group_impl_poll_count /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/sock/sock.c:712 00:08:08.048 #6 NEW cov: 11950 ft: 12299 corp: 3/59b lim: 50 exec/s: 0 rss: 70Mb L: 29/29 MS: 3 InsertByte-EraseBytes-InsertRepeatedBytes- 00:08:08.048 [2024-05-13 
02:48:58.821445] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 00:08:08.048 [2024-05-13 02:48:58.821475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.048 [2024-05-13 02:48:58.821519] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:08:08.048 [2024-05-13 02:48:58.821536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.048 #7 NEW cov: 11956 ft: 12614 corp: 4/82b lim: 50 exec/s: 0 rss: 70Mb L: 23/29 MS: 1 CrossOver- 00:08:08.307 [2024-05-13 02:48:58.861591] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13304101901659253667 len:33793 00:08:08.307 [2024-05-13 02:48:58.861622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.307 [2024-05-13 02:48:58.861664] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:08:08.307 [2024-05-13 02:48:58.861680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.307 #8 NEW cov: 12041 ft: 12940 corp: 5/111b lim: 50 exec/s: 0 rss: 70Mb L: 29/29 MS: 1 CMP- DE: "\253\243\270\241\2520\204\000"- 00:08:08.307 [2024-05-13 02:48:58.911705] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13304101901659253667 len:34305 00:08:08.307 [2024-05-13 02:48:58.911733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.307 [2024-05-13 02:48:58.911798] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:08:08.307 [2024-05-13 02:48:58.911814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.307 #9 NEW cov: 12041 ft: 13109 corp: 6/140b lim: 50 exec/s: 0 rss: 70Mb L: 29/29 MS: 1 ChangeBit- 00:08:08.307 [2024-05-13 02:48:58.951887] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:08:08.307 [2024-05-13 02:48:58.951915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.307 [2024-05-13 02:48:58.951947] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:08:08.307 [2024-05-13 02:48:58.951963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.307 [2024-05-13 02:48:58.952019] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65291 00:08:08.307 [2024-05-13 02:48:58.952035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:08.307 #15 NEW cov: 12041 ft: 13444 corp: 7/170b lim: 50 
exec/s: 0 rss: 70Mb L: 30/30 MS: 1 InsertByte- 00:08:08.307 [2024-05-13 02:48:58.992124] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 00:08:08.307 [2024-05-13 02:48:58.992151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.307 [2024-05-13 02:48:58.992213] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:4278190080 len:1 00:08:08.307 [2024-05-13 02:48:58.992229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.307 [2024-05-13 02:48:58.992285] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:08:08.307 [2024-05-13 02:48:58.992301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:08.307 [2024-05-13 02:48:58.992355] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744069414584575 len:65536 00:08:08.307 [2024-05-13 02:48:58.992370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:08.307 #16 NEW cov: 12041 ft: 13742 corp: 8/215b lim: 50 exec/s: 0 rss: 70Mb L: 45/45 MS: 1 InsertRepeatedBytes- 00:08:08.307 [2024-05-13 02:48:59.042144] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13304101901659253667 len:34305 00:08:08.307 [2024-05-13 02:48:59.042172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.307 [2024-05-13 02:48:59.042221] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744071478181887 len:65536 00:08:08.307 [2024-05-13 02:48:59.042238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.307 [2024-05-13 02:48:59.042295] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65291 00:08:08.307 [2024-05-13 02:48:59.042312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:08.307 #17 NEW cov: 12041 ft: 13781 corp: 9/245b lim: 50 exec/s: 0 rss: 70Mb L: 30/45 MS: 1 InsertByte- 00:08:08.307 [2024-05-13 02:48:59.092394] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 00:08:08.307 [2024-05-13 02:48:59.092424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.307 [2024-05-13 02:48:59.092465] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:4278190080 len:1 00:08:08.307 [2024-05-13 02:48:59.092480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.307 [2024-05-13 02:48:59.092534] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:08:08.307 
[2024-05-13 02:48:59.092550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:08.307 [2024-05-13 02:48:59.092605] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:11797356975413526699 len:12421 00:08:08.307 [2024-05-13 02:48:59.092619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:08.567 #18 NEW cov: 12041 ft: 13816 corp: 10/290b lim: 50 exec/s: 0 rss: 70Mb L: 45/45 MS: 1 PersAutoDict- DE: "\253\243\270\241\2520\204\000"- 00:08:08.567 [2024-05-13 02:48:59.142538] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 00:08:08.567 [2024-05-13 02:48:59.142566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.567 [2024-05-13 02:48:59.142620] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:08:08.567 [2024-05-13 02:48:59.142636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.567 [2024-05-13 02:48:59.142689] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:08:08.567 [2024-05-13 02:48:59.142705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:08.567 [2024-05-13 02:48:59.142757] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:792633534417207295 len:65536 00:08:08.567 [2024-05-13 02:48:59.142773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:08.567 #19 NEW cov: 12041 ft: 13862 corp: 11/335b lim: 50 exec/s: 0 rss: 70Mb L: 45/45 MS: 1 CrossOver- 00:08:08.567 [2024-05-13 02:48:59.182444] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 00:08:08.567 [2024-05-13 02:48:59.182471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.567 [2024-05-13 02:48:59.182506] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:1099494850560 len:65536 00:08:08.567 [2024-05-13 02:48:59.182522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.567 #20 NEW cov: 12041 ft: 13885 corp: 12/358b lim: 50 exec/s: 0 rss: 70Mb L: 23/45 MS: 1 CrossOver- 00:08:08.567 [2024-05-13 02:48:59.222863] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:4278190080 len:1 00:08:08.567 [2024-05-13 02:48:59.222890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.567 [2024-05-13 02:48:59.222951] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18422998711358455808 len:41387 00:08:08.567 [2024-05-13 02:48:59.222967] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.567 [2024-05-13 02:48:59.223021] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744070228672767 len:65536 00:08:08.567 [2024-05-13 02:48:59.223039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:08.567 [2024-05-13 02:48:59.223095] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:08:08.567 [2024-05-13 02:48:59.223110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:08.567 #21 NEW cov: 12041 ft: 13906 corp: 13/400b lim: 50 exec/s: 0 rss: 70Mb L: 42/45 MS: 1 InsertRepeatedBytes- 00:08:08.567 [2024-05-13 02:48:59.262771] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:08:08.567 [2024-05-13 02:48:59.262798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.567 [2024-05-13 02:48:59.262856] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:12263446923744360609 len:65536 00:08:08.567 [2024-05-13 02:48:59.262873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.567 [2024-05-13 02:48:59.262927] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:08:08.567 [2024-05-13 02:48:59.262942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:08.567 #22 NEW cov: 12041 ft: 13923 corp: 14/437b lim: 50 exec/s: 0 rss: 70Mb L: 37/45 MS: 1 PersAutoDict- DE: "\253\243\270\241\2520\204\000"- 00:08:08.567 [2024-05-13 02:48:59.313022] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 00:08:08.567 [2024-05-13 02:48:59.313050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.567 [2024-05-13 02:48:59.313112] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:764487401472 len:1 00:08:08.567 [2024-05-13 02:48:59.313129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.567 [2024-05-13 02:48:59.313183] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:08:08.567 [2024-05-13 02:48:59.313199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:08.567 [2024-05-13 02:48:59.313252] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:1095216660480 len:65536 00:08:08.567 [2024-05-13 02:48:59.313268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:08.567 #23 NEW cov: 12041 ft: 13954 
corp: 15/486b lim: 50 exec/s: 0 rss: 70Mb L: 49/49 MS: 1 CMP- DE: "\000\000\000\261"- 00:08:08.567 [2024-05-13 02:48:59.352983] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13304101901659253667 len:34305 00:08:08.567 [2024-05-13 02:48:59.353011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.567 [2024-05-13 02:48:59.353060] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744071478181887 len:65536 00:08:08.567 [2024-05-13 02:48:59.353077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.826 NEW_FUNC[1/1]: 0x1a07120 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:08.826 #24 NEW cov: 12064 ft: 14002 corp: 16/511b lim: 50 exec/s: 0 rss: 70Mb L: 25/49 MS: 1 EraseBytes- 00:08:08.826 [2024-05-13 02:48:59.403205] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:17795682518166861558 len:63223 00:08:08.826 [2024-05-13 02:48:59.403232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.827 [2024-05-13 02:48:59.403265] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:17795682518166861558 len:63223 00:08:08.827 [2024-05-13 02:48:59.403281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.827 [2024-05-13 02:48:59.403338] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:17795682518166861558 len:63223 00:08:08.827 [2024-05-13 02:48:59.403354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:08.827 #26 NEW cov: 12064 ft: 14003 corp: 17/549b lim: 50 exec/s: 0 rss: 70Mb L: 38/49 MS: 2 CopyPart-InsertRepeatedBytes- 00:08:08.827 [2024-05-13 02:48:59.443263] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 00:08:08.827 [2024-05-13 02:48:59.443292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.827 [2024-05-13 02:48:59.443342] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:764487401472 len:1 00:08:08.827 [2024-05-13 02:48:59.443358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.827 [2024-05-13 02:48:59.443415] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:08:08.827 [2024-05-13 02:48:59.443433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:08.827 #27 NEW cov: 12064 ft: 14017 corp: 18/579b lim: 50 exec/s: 27 rss: 70Mb L: 30/49 MS: 1 EraseBytes- 00:08:08.827 [2024-05-13 02:48:59.493471] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13260473280269101987 len:34305 00:08:08.827 
[2024-05-13 02:48:59.493498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.827 [2024-05-13 02:48:59.493556] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744071478181887 len:65536 00:08:08.827 [2024-05-13 02:48:59.493572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.827 [2024-05-13 02:48:59.493626] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65291 00:08:08.827 [2024-05-13 02:48:59.493642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:08.827 #28 NEW cov: 12064 ft: 14046 corp: 19/609b lim: 50 exec/s: 28 rss: 70Mb L: 30/49 MS: 1 ChangeByte- 00:08:08.827 [2024-05-13 02:48:59.533547] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446742974382473215 len:65536 00:08:08.827 [2024-05-13 02:48:59.533577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.827 [2024-05-13 02:48:59.533625] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:4294901760 len:45313 00:08:08.827 [2024-05-13 02:48:59.533642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.827 [2024-05-13 02:48:59.533698] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744069431361535 len:65536 00:08:08.827 [2024-05-13 02:48:59.533715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:08.827 #29 NEW cov: 12064 ft: 14056 corp: 20/640b lim: 50 exec/s: 29 rss: 70Mb L: 31/49 MS: 1 CopyPart- 00:08:08.827 [2024-05-13 02:48:59.583718] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:17795682518166861558 len:63223 00:08:08.827 [2024-05-13 02:48:59.583746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.827 [2024-05-13 02:48:59.583779] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:17795682518166861558 len:63223 00:08:08.827 [2024-05-13 02:48:59.583795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.827 [2024-05-13 02:48:59.583850] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:17795682518166861558 len:63223 00:08:08.827 [2024-05-13 02:48:59.583867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:08.827 #30 NEW cov: 12064 ft: 14104 corp: 21/678b lim: 50 exec/s: 30 rss: 70Mb L: 38/49 MS: 1 CopyPart- 00:08:09.086 [2024-05-13 02:48:59.633734] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:281474976648448 len:65536 00:08:09.086 [2024-05-13 02:48:59.633763] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.086 [2024-05-13 02:48:59.633803] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:08:09.086 [2024-05-13 02:48:59.633819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.086 #31 NEW cov: 12064 ft: 14129 corp: 22/707b lim: 50 exec/s: 31 rss: 70Mb L: 29/49 MS: 1 CMP- DE: "\015\000\000\000"- 00:08:09.086 [2024-05-13 02:48:59.673893] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:17795682518166861558 len:63223 00:08:09.086 [2024-05-13 02:48:59.673920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.086 [2024-05-13 02:48:59.673963] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:17798225727216613110 len:65536 00:08:09.086 [2024-05-13 02:48:59.673979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.086 [2024-05-13 02:48:59.674034] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:4294967040 len:256 00:08:09.086 [2024-05-13 02:48:59.674050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.086 #32 NEW cov: 12064 ft: 14228 corp: 23/745b lim: 50 exec/s: 32 rss: 70Mb L: 38/49 MS: 1 CrossOver- 00:08:09.086 [2024-05-13 02:48:59.723923] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:17729253367101191926 len:1 00:08:09.086 [2024-05-13 02:48:59.723951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.086 #34 NEW cov: 12064 ft: 14528 corp: 24/758b lim: 50 exec/s: 34 rss: 71Mb L: 13/49 MS: 2 CrossOver-InsertRepeatedBytes- 00:08:09.086 [2024-05-13 02:48:59.764204] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446743193241255935 len:65536 00:08:09.086 [2024-05-13 02:48:59.764231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.086 [2024-05-13 02:48:59.764267] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:08:09.087 [2024-05-13 02:48:59.764283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.087 [2024-05-13 02:48:59.764343] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65291 00:08:09.087 [2024-05-13 02:48:59.764359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.087 #35 NEW cov: 12064 ft: 14533 corp: 25/788b lim: 50 exec/s: 35 rss: 71Mb L: 30/49 MS: 1 InsertByte- 00:08:09.087 [2024-05-13 02:48:59.804440] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446742974382473215 len:65536 00:08:09.087 
[2024-05-13 02:48:59.804469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.087 [2024-05-13 02:48:59.804514] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:4294901760 len:45313 00:08:09.087 [2024-05-13 02:48:59.804530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.087 [2024-05-13 02:48:59.804583] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:17795682514040258550 len:2807 00:08:09.087 [2024-05-13 02:48:59.804598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.087 [2024-05-13 02:48:59.804652] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744069414584320 len:65536 00:08:09.087 [2024-05-13 02:48:59.804669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:09.087 #36 NEW cov: 12064 ft: 14543 corp: 26/830b lim: 50 exec/s: 36 rss: 71Mb L: 42/49 MS: 1 CrossOver- 00:08:09.087 [2024-05-13 02:48:59.854672] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 00:08:09.087 [2024-05-13 02:48:59.854699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.087 [2024-05-13 02:48:59.854750] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:4278190080 len:1 00:08:09.087 [2024-05-13 02:48:59.854766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.087 [2024-05-13 02:48:59.854822] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:2113929216 len:1 00:08:09.087 [2024-05-13 02:48:59.854838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.087 [2024-05-13 02:48:59.854896] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744069414584575 len:65536 00:08:09.087 [2024-05-13 02:48:59.854913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:09.087 #37 NEW cov: 12064 ft: 14553 corp: 27/875b lim: 50 exec/s: 37 rss: 71Mb L: 45/49 MS: 1 ChangeByte- 00:08:09.346 [2024-05-13 02:48:59.894533] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 00:08:09.346 [2024-05-13 02:48:59.894562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.346 [2024-05-13 02:48:59.894594] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:08:09.346 [2024-05-13 02:48:59.894610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.346 #38 NEW cov: 12064 ft: 14565 corp: 
28/898b lim: 50 exec/s: 38 rss: 71Mb L: 23/49 MS: 1 CopyPart- 00:08:09.346 [2024-05-13 02:48:59.934806] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 00:08:09.346 [2024-05-13 02:48:59.934836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.346 [2024-05-13 02:48:59.934872] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374686483966590975 len:1 00:08:09.346 [2024-05-13 02:48:59.934887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.346 [2024-05-13 02:48:59.934942] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:9079256848778919936 len:1 00:08:09.346 [2024-05-13 02:48:59.934958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.346 [2024-05-13 02:48:59.935010] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:1095216660480 len:65536 00:08:09.346 [2024-05-13 02:48:59.935026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:09.346 #39 NEW cov: 12064 ft: 14573 corp: 29/947b lim: 50 exec/s: 39 rss: 71Mb L: 49/49 MS: 1 CopyPart- 00:08:09.346 [2024-05-13 02:48:59.984732] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 00:08:09.346 [2024-05-13 02:48:59.984760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.346 [2024-05-13 02:48:59.984793] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65294 00:08:09.346 [2024-05-13 02:48:59.984809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.346 #40 NEW cov: 12064 ft: 14580 corp: 30/974b lim: 50 exec/s: 40 rss: 71Mb L: 27/49 MS: 1 PersAutoDict- DE: "\015\000\000\000"- 00:08:09.346 [2024-05-13 02:49:00.035141] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069414584497 len:65536 00:08:09.346 [2024-05-13 02:49:00.035170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.346 [2024-05-13 02:49:00.035226] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:4278190080 len:1 00:08:09.346 [2024-05-13 02:49:00.035242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.346 [2024-05-13 02:49:00.035295] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:08:09.346 [2024-05-13 02:49:00.035311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.346 [2024-05-13 02:49:00.035366] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 
lba:11797356975413526699 len:12421 00:08:09.346 [2024-05-13 02:49:00.035385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:09.346 #41 NEW cov: 12064 ft: 14596 corp: 31/1019b lim: 50 exec/s: 41 rss: 71Mb L: 45/49 MS: 1 PersAutoDict- DE: "\000\000\000\261"- 00:08:09.346 [2024-05-13 02:49:00.085213] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:17795682518166861558 len:63223 00:08:09.346 [2024-05-13 02:49:00.085245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.346 [2024-05-13 02:49:00.085279] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:17795682518167254774 len:63223 00:08:09.346 [2024-05-13 02:49:00.085295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.346 [2024-05-13 02:49:00.085355] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:17795682518166861558 len:63223 00:08:09.347 [2024-05-13 02:49:00.085372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.347 #42 NEW cov: 12064 ft: 14617 corp: 32/1057b lim: 50 exec/s: 42 rss: 71Mb L: 38/49 MS: 1 ChangeBinInt- 00:08:09.347 [2024-05-13 02:49:00.125066] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:17729253521720014582 len:1 00:08:09.347 [2024-05-13 02:49:00.125096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.605 #43 NEW cov: 12064 ft: 14627 corp: 33/1070b lim: 50 exec/s: 43 rss: 71Mb L: 13/49 MS: 1 ChangeByte- 00:08:09.605 [2024-05-13 02:49:00.175240] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 00:08:09.605 [2024-05-13 02:49:00.175268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.605 #44 NEW cov: 12064 ft: 14699 corp: 34/1087b lim: 50 exec/s: 44 rss: 71Mb L: 17/49 MS: 1 EraseBytes- 00:08:09.605 [2024-05-13 02:49:00.215694] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:4278190080 len:1 00:08:09.605 [2024-05-13 02:49:00.215722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.605 [2024-05-13 02:49:00.215770] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18422998711358455808 len:41387 00:08:09.605 [2024-05-13 02:49:00.215786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.605 [2024-05-13 02:49:00.215839] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744070228672767 len:65536 00:08:09.605 [2024-05-13 02:49:00.215855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.605 [2024-05-13 02:49:00.215911] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:08:09.605 [2024-05-13 02:49:00.215928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:09.606 #45 NEW cov: 12064 ft: 14735 corp: 35/1129b lim: 50 exec/s: 45 rss: 71Mb L: 42/49 MS: 1 ShuffleBytes- 00:08:09.606 [2024-05-13 02:49:00.265705] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:08:09.606 [2024-05-13 02:49:00.265732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.606 [2024-05-13 02:49:00.265773] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:11649176732520658982 len:256 00:08:09.606 [2024-05-13 02:49:00.265789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.606 [2024-05-13 02:49:00.265845] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:08:09.606 [2024-05-13 02:49:00.265860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.606 #46 NEW cov: 12064 ft: 14801 corp: 36/1167b lim: 50 exec/s: 46 rss: 71Mb L: 38/49 MS: 1 InsertByte- 00:08:09.606 [2024-05-13 02:49:00.315745] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 00:08:09.606 [2024-05-13 02:49:00.315772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.606 [2024-05-13 02:49:00.315829] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:3329 00:08:09.606 [2024-05-13 02:49:00.315846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.606 #47 NEW cov: 12064 ft: 14878 corp: 37/1193b lim: 50 exec/s: 47 rss: 72Mb L: 26/49 MS: 1 EraseBytes- 00:08:09.606 [2024-05-13 02:49:00.365787] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:17729253521720014582 len:281 00:08:09.606 [2024-05-13 02:49:00.365815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.606 #48 NEW cov: 12064 ft: 14898 corp: 38/1206b lim: 50 exec/s: 48 rss: 72Mb L: 13/49 MS: 1 CMP- DE: "\001\030"- 00:08:09.865 [2024-05-13 02:49:00.415941] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13261589808557304739 len:12423 00:08:09.865 [2024-05-13 02:49:00.415969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.865 #49 NEW cov: 12064 ft: 14985 corp: 39/1217b lim: 50 exec/s: 49 rss: 72Mb L: 11/49 MS: 1 CrossOver- 00:08:09.865 [2024-05-13 02:49:00.456424] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 00:08:09.865 [2024-05-13 02:49:00.456452] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.865 [2024-05-13 02:49:00.456502] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:4278190080 len:1 00:08:09.865 [2024-05-13 02:49:00.456519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.865 [2024-05-13 02:49:00.456572] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:2113929216 len:1 00:08:09.865 [2024-05-13 02:49:00.456589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.865 [2024-05-13 02:49:00.456646] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744069414584575 len:65536 00:08:09.865 [2024-05-13 02:49:00.456662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:09.865 #50 NEW cov: 12064 ft: 14989 corp: 40/1261b lim: 50 exec/s: 25 rss: 72Mb L: 44/49 MS: 1 EraseBytes- 00:08:09.865 #50 DONE cov: 12064 ft: 14989 corp: 40/1261b lim: 50 exec/s: 25 rss: 72Mb 00:08:09.865 ###### Recommended dictionary. ###### 00:08:09.865 "\253\243\270\241\2520\204\000" # Uses: 3 00:08:09.865 "\000\000\000\261" # Uses: 1 00:08:09.865 "\015\000\000\000" # Uses: 1 00:08:09.865 "\001\030" # Uses: 0 00:08:09.865 ###### End of recommended dictionary. ###### 00:08:09.865 Done 50 runs in 2 second(s) 00:08:09.865 [2024-05-13 02:49:00.478272] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:09.866 02:49:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_19.conf /var/tmp/suppress_nvmf_fuzz 00:08:09.866 02:49:00 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:09.866 02:49:00 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:09.866 02:49:00 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 20 1 0x1 00:08:09.866 02:49:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=20 00:08:09.866 02:49:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:09.866 02:49:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:09.866 02:49:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:08:09.866 02:49:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_20.conf 00:08:09.866 02:49:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:09.866 02:49:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:09.866 02:49:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 20 00:08:09.866 02:49:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4420 00:08:09.866 02:49:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:08:09.866 02:49:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' 00:08:09.866 02:49:00 
llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4420"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:09.866 02:49:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:09.866 02:49:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:09.866 02:49:00 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' -c /tmp/fuzz_json_20.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 -Z 20 00:08:09.866 [2024-05-13 02:49:00.640938] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:08:09.866 [2024-05-13 02:49:00.641033] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3506426 ] 00:08:10.125 EAL: No free 2048 kB hugepages reported on node 1 00:08:10.125 [2024-05-13 02:49:00.852492] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:10.125 [2024-05-13 02:49:00.891291] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.125 [2024-05-13 02:49:00.922058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.384 [2024-05-13 02:49:00.974274] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:10.384 [2024-05-13 02:49:00.990236] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:10.384 [2024-05-13 02:49:00.990649] tcp.c: 966:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:08:10.384 INFO: Running with entropic power schedule (0xFF, 100). 00:08:10.384 INFO: Seed: 1092567381 00:08:10.384 INFO: Loaded 1 modules (350429 inline 8-bit counters): 350429 [0x279074c, 0x27e6029), 00:08:10.384 INFO: Loaded 1 PC tables (350429 PCs): 350429 [0x27e6030,0x2d3ee00), 00:08:10.384 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:08:10.384 INFO: A corpus is not provided, starting from an empty corpus 00:08:10.384 #2 INITED exec/s: 0 rss: 63Mb 00:08:10.384 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:10.384 This may also happen if the target rejected all inputs we tried so far 00:08:10.384 [2024-05-13 02:49:01.038818] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:10.384 [2024-05-13 02:49:01.038847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.384 [2024-05-13 02:49:01.038915] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:10.384 [2024-05-13 02:49:01.038931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.643 NEW_FUNC[1/687]: 0x4c5f10 in fuzz_nvm_reservation_acquire_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:597 00:08:10.643 NEW_FUNC[2/687]: 0x4e0360 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:10.643 #3 NEW cov: 11872 ft: 11878 corp: 2/39b lim: 90 exec/s: 0 rss: 70Mb L: 38/38 MS: 1 InsertRepeatedBytes- 00:08:10.643 [2024-05-13 02:49:01.349539] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:10.643 [2024-05-13 02:49:01.349576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.643 [2024-05-13 02:49:01.349648] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:10.643 [2024-05-13 02:49:01.349664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.643 #8 NEW cov: 12008 ft: 12515 corp: 3/83b lim: 90 exec/s: 0 rss: 70Mb L: 44/44 MS: 5 CrossOver-InsertByte-InsertByte-ChangeBit-InsertRepeatedBytes- 00:08:10.643 [2024-05-13 02:49:01.389645] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:10.643 [2024-05-13 02:49:01.389673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.643 [2024-05-13 02:49:01.389738] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:10.643 [2024-05-13 02:49:01.389753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.643 #9 NEW cov: 12014 ft: 12805 corp: 4/127b lim: 90 exec/s: 0 rss: 70Mb L: 44/44 MS: 1 ChangeBinInt- 00:08:10.643 [2024-05-13 02:49:01.429749] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:10.643 [2024-05-13 02:49:01.429776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.643 [2024-05-13 02:49:01.429824] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:10.643 [2024-05-13 02:49:01.429841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.902 #15 NEW cov: 12099 ft: 13018 corp: 5/171b lim: 90 exec/s: 0 rss: 70Mb L: 44/44 MS: 1 CopyPart- 00:08:10.902 [2024-05-13 02:49:01.479886] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:10.902 [2024-05-13 02:49:01.479913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.902 [2024-05-13 02:49:01.479946] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:10.902 [2024-05-13 02:49:01.479961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.902 #16 NEW cov: 12099 ft: 13114 corp: 6/209b lim: 90 exec/s: 0 rss: 70Mb L: 38/44 MS: 1 CopyPart- 00:08:10.902 [2024-05-13 02:49:01.520000] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:10.902 [2024-05-13 02:49:01.520027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.902 [2024-05-13 02:49:01.520083] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:10.902 [2024-05-13 02:49:01.520100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.902 #17 NEW cov: 12099 ft: 13181 corp: 7/253b lim: 90 exec/s: 0 rss: 70Mb L: 44/44 MS: 1 ShuffleBytes- 00:08:10.902 [2024-05-13 02:49:01.560127] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:10.902 [2024-05-13 02:49:01.560155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.902 [2024-05-13 02:49:01.560198] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:10.902 [2024-05-13 02:49:01.560214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.902 #18 NEW cov: 12099 ft: 13281 corp: 8/298b lim: 90 exec/s: 0 rss: 70Mb L: 45/45 MS: 1 InsertByte- 00:08:10.902 [2024-05-13 02:49:01.610518] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:10.902 [2024-05-13 02:49:01.610546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.902 [2024-05-13 02:49:01.610607] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:10.902 [2024-05-13 02:49:01.610623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.902 [2024-05-13 02:49:01.610675] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:10.902 [2024-05-13 02:49:01.610690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.902 [2024-05-13 02:49:01.610745] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:10.902 [2024-05-13 02:49:01.610760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:10.903 #19 NEW cov: 12099 ft: 13726 corp: 9/376b lim: 90 exec/s: 
0 rss: 70Mb L: 78/78 MS: 1 InsertRepeatedBytes- 00:08:10.903 [2024-05-13 02:49:01.650638] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:10.903 [2024-05-13 02:49:01.650664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.903 [2024-05-13 02:49:01.650728] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:10.903 [2024-05-13 02:49:01.650744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.903 [2024-05-13 02:49:01.650794] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:10.903 [2024-05-13 02:49:01.650809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.903 [2024-05-13 02:49:01.650860] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:10.903 [2024-05-13 02:49:01.650875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:10.903 #20 NEW cov: 12099 ft: 13754 corp: 10/450b lim: 90 exec/s: 0 rss: 71Mb L: 74/78 MS: 1 InsertRepeatedBytes- 00:08:10.903 [2024-05-13 02:49:01.700657] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:10.903 [2024-05-13 02:49:01.700684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.903 [2024-05-13 02:49:01.700736] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:10.903 [2024-05-13 02:49:01.700750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.903 [2024-05-13 02:49:01.700805] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:10.903 [2024-05-13 02:49:01.700820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.161 #21 NEW cov: 12099 ft: 14076 corp: 11/518b lim: 90 exec/s: 0 rss: 71Mb L: 68/78 MS: 1 CrossOver- 00:08:11.161 [2024-05-13 02:49:01.750588] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:11.161 [2024-05-13 02:49:01.750615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.161 [2024-05-13 02:49:01.750678] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:11.161 [2024-05-13 02:49:01.750694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.161 #22 NEW cov: 12099 ft: 14093 corp: 12/556b lim: 90 exec/s: 0 rss: 71Mb L: 38/78 MS: 1 ChangeBit- 00:08:11.161 [2024-05-13 02:49:01.791069] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:11.161 [2024-05-13 02:49:01.791096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.161 [2024-05-13 02:49:01.791161] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:11.161 [2024-05-13 02:49:01.791176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.161 [2024-05-13 02:49:01.791231] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:11.161 [2024-05-13 02:49:01.791246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.161 [2024-05-13 02:49:01.791302] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:11.161 [2024-05-13 02:49:01.791317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:11.161 #23 NEW cov: 12099 ft: 14120 corp: 13/628b lim: 90 exec/s: 0 rss: 71Mb L: 72/78 MS: 1 InsertRepeatedBytes- 00:08:11.161 [2024-05-13 02:49:01.830892] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:11.161 [2024-05-13 02:49:01.830919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.161 [2024-05-13 02:49:01.830969] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:11.161 [2024-05-13 02:49:01.830985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.161 #24 NEW cov: 12099 ft: 14141 corp: 14/673b lim: 90 exec/s: 0 rss: 71Mb L: 45/78 MS: 1 ChangeBinInt- 00:08:11.161 [2024-05-13 02:49:01.871284] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:11.161 [2024-05-13 02:49:01.871311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.161 [2024-05-13 02:49:01.871376] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:11.161 [2024-05-13 02:49:01.871395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.161 [2024-05-13 02:49:01.871450] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:11.161 [2024-05-13 02:49:01.871465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.161 [2024-05-13 02:49:01.871519] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:11.161 [2024-05-13 02:49:01.871533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:11.161 #25 NEW cov: 12099 ft: 14181 corp: 15/751b lim: 90 exec/s: 0 rss: 71Mb L: 78/78 MS: 1 ChangeBit- 00:08:11.161 [2024-05-13 02:49:01.921140] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:11.161 [2024-05-13 02:49:01.921165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.161 [2024-05-13 02:49:01.921223] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:11.161 [2024-05-13 02:49:01.921244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.161 NEW_FUNC[1/1]: 0x1a07120 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:11.161 #26 NEW cov: 12122 ft: 14222 corp: 16/796b lim: 90 exec/s: 0 rss: 71Mb L: 45/78 MS: 1 ShuffleBytes- 00:08:11.420 [2024-05-13 02:49:01.971547] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:11.420 [2024-05-13 02:49:01.971573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.420 [2024-05-13 02:49:01.971621] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:11.420 [2024-05-13 02:49:01.971636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.420 [2024-05-13 02:49:01.971686] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:11.420 [2024-05-13 02:49:01.971701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.420 [2024-05-13 02:49:01.971753] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:11.420 [2024-05-13 02:49:01.971769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:11.420 #27 NEW cov: 12122 ft: 14232 corp: 17/882b lim: 90 exec/s: 0 rss: 71Mb L: 86/86 MS: 1 CMP- DE: "\245\253\336o\2540\204\000"- 00:08:11.420 [2024-05-13 02:49:02.011655] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:11.420 [2024-05-13 02:49:02.011683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.420 [2024-05-13 02:49:02.011736] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:11.420 [2024-05-13 02:49:02.011751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.420 [2024-05-13 02:49:02.011803] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:11.420 [2024-05-13 02:49:02.011819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.420 [2024-05-13 02:49:02.011873] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:11.420 [2024-05-13 02:49:02.011888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:11.420 #28 NEW cov: 12122 ft: 14307 corp: 18/956b lim: 90 exec/s: 28 rss: 71Mb L: 74/86 MS: 1 CrossOver- 00:08:11.421 [2024-05-13 02:49:02.051455] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 
cid:0 nsid:0 00:08:11.421 [2024-05-13 02:49:02.051481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.421 [2024-05-13 02:49:02.051534] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:11.421 [2024-05-13 02:49:02.051549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.421 #29 NEW cov: 12122 ft: 14334 corp: 19/1000b lim: 90 exec/s: 29 rss: 71Mb L: 44/86 MS: 1 CrossOver- 00:08:11.421 [2024-05-13 02:49:02.091885] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:11.421 [2024-05-13 02:49:02.091912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.421 [2024-05-13 02:49:02.091973] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:11.421 [2024-05-13 02:49:02.091991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.421 [2024-05-13 02:49:02.092045] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:11.421 [2024-05-13 02:49:02.092061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.421 [2024-05-13 02:49:02.092115] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:11.421 [2024-05-13 02:49:02.092131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:11.421 #30 NEW cov: 12122 ft: 14343 corp: 20/1080b lim: 90 exec/s: 30 rss: 71Mb L: 80/86 MS: 1 CrossOver- 00:08:11.421 [2024-05-13 02:49:02.131991] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:11.421 [2024-05-13 02:49:02.132018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.421 [2024-05-13 02:49:02.132081] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:11.421 [2024-05-13 02:49:02.132098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.421 [2024-05-13 02:49:02.132148] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:11.421 [2024-05-13 02:49:02.132163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.421 [2024-05-13 02:49:02.132217] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:11.421 [2024-05-13 02:49:02.132232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:11.421 #31 NEW cov: 12122 ft: 14346 corp: 21/1160b lim: 90 exec/s: 31 rss: 71Mb L: 80/86 MS: 1 ChangeBinInt- 00:08:11.421 [2024-05-13 02:49:02.181916] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 
nsid:0 00:08:11.421 [2024-05-13 02:49:02.181942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.421 [2024-05-13 02:49:02.181992] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:11.421 [2024-05-13 02:49:02.182009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.421 #32 NEW cov: 12122 ft: 14363 corp: 22/1204b lim: 90 exec/s: 32 rss: 71Mb L: 44/86 MS: 1 ChangeByte- 00:08:11.421 [2024-05-13 02:49:02.222298] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:11.421 [2024-05-13 02:49:02.222326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.421 [2024-05-13 02:49:02.222361] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:11.421 [2024-05-13 02:49:02.222376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.421 [2024-05-13 02:49:02.222434] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:11.421 [2024-05-13 02:49:02.222449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.421 [2024-05-13 02:49:02.222505] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:11.421 [2024-05-13 02:49:02.222522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:11.680 #33 NEW cov: 12122 ft: 14386 corp: 23/1278b lim: 90 exec/s: 33 rss: 71Mb L: 74/86 MS: 1 CMP- DE: "\036\000"- 00:08:11.680 [2024-05-13 02:49:02.272405] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:11.680 [2024-05-13 02:49:02.272432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.680 [2024-05-13 02:49:02.272494] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:11.680 [2024-05-13 02:49:02.272509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.680 [2024-05-13 02:49:02.272561] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:11.680 [2024-05-13 02:49:02.272576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.680 [2024-05-13 02:49:02.272628] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:11.680 [2024-05-13 02:49:02.272643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:11.680 #34 NEW cov: 12122 ft: 14389 corp: 24/1352b lim: 90 exec/s: 34 rss: 71Mb L: 74/86 MS: 1 ChangeBinInt- 00:08:11.680 [2024-05-13 02:49:02.322269] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 
cid:0 nsid:0 00:08:11.680 [2024-05-13 02:49:02.322296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.680 [2024-05-13 02:49:02.322334] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:11.680 [2024-05-13 02:49:02.322347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.680 #35 NEW cov: 12122 ft: 14407 corp: 25/1396b lim: 90 exec/s: 35 rss: 72Mb L: 44/86 MS: 1 ChangeByte- 00:08:11.680 [2024-05-13 02:49:02.362275] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:11.680 [2024-05-13 02:49:02.362303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.680 #36 NEW cov: 12122 ft: 15202 corp: 26/1415b lim: 90 exec/s: 36 rss: 72Mb L: 19/86 MS: 1 InsertRepeatedBytes- 00:08:11.680 [2024-05-13 02:49:02.402492] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:11.680 [2024-05-13 02:49:02.402519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.680 [2024-05-13 02:49:02.402582] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:11.680 [2024-05-13 02:49:02.402598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.680 #37 NEW cov: 12122 ft: 15216 corp: 27/1460b lim: 90 exec/s: 37 rss: 72Mb L: 45/86 MS: 1 InsertByte- 00:08:11.680 [2024-05-13 02:49:02.442599] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:11.680 [2024-05-13 02:49:02.442625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.680 [2024-05-13 02:49:02.442676] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:11.680 [2024-05-13 02:49:02.442691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.680 #38 NEW cov: 12122 ft: 15239 corp: 28/1504b lim: 90 exec/s: 38 rss: 72Mb L: 44/86 MS: 1 ShuffleBytes- 00:08:11.680 [2024-05-13 02:49:02.482695] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:11.680 [2024-05-13 02:49:02.482722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.680 [2024-05-13 02:49:02.482770] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:11.680 [2024-05-13 02:49:02.482787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.939 #39 NEW cov: 12122 ft: 15290 corp: 29/1549b lim: 90 exec/s: 39 rss: 72Mb L: 45/86 MS: 1 ChangeByte- 00:08:11.939 [2024-05-13 02:49:02.522669] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:11.939 [2024-05-13 02:49:02.522696] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.939 #40 NEW cov: 12122 ft: 15311 corp: 30/1575b lim: 90 exec/s: 40 rss: 72Mb L: 26/86 MS: 1 EraseBytes- 00:08:11.939 [2024-05-13 02:49:02.563063] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:11.939 [2024-05-13 02:49:02.563089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.939 [2024-05-13 02:49:02.563128] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:11.939 [2024-05-13 02:49:02.563143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.939 [2024-05-13 02:49:02.563194] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:11.939 [2024-05-13 02:49:02.563209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.939 #41 NEW cov: 12122 ft: 15357 corp: 31/1643b lim: 90 exec/s: 41 rss: 72Mb L: 68/86 MS: 1 ChangeBit- 00:08:11.939 [2024-05-13 02:49:02.613251] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:11.939 [2024-05-13 02:49:02.613279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.939 [2024-05-13 02:49:02.613314] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:11.939 [2024-05-13 02:49:02.613329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.939 [2024-05-13 02:49:02.613386] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:11.939 [2024-05-13 02:49:02.613403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.939 #42 NEW cov: 12122 ft: 15361 corp: 32/1711b lim: 90 exec/s: 42 rss: 72Mb L: 68/86 MS: 1 ChangeBit- 00:08:11.939 [2024-05-13 02:49:02.663227] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:11.939 [2024-05-13 02:49:02.663255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.939 [2024-05-13 02:49:02.663321] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:11.939 [2024-05-13 02:49:02.663337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.939 #43 NEW cov: 12122 ft: 15365 corp: 33/1756b lim: 90 exec/s: 43 rss: 72Mb L: 45/86 MS: 1 ChangeASCIIInt- 00:08:11.939 [2024-05-13 02:49:02.703640] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:11.939 [2024-05-13 02:49:02.703667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.939 [2024-05-13 02:49:02.703730] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:11.939 [2024-05-13 02:49:02.703746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.939 [2024-05-13 02:49:02.703798] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:11.939 [2024-05-13 02:49:02.703817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.939 [2024-05-13 02:49:02.703870] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:11.939 [2024-05-13 02:49:02.703886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:11.939 #44 NEW cov: 12122 ft: 15378 corp: 34/1839b lim: 90 exec/s: 44 rss: 72Mb L: 83/86 MS: 1 InsertRepeatedBytes- 00:08:12.198 [2024-05-13 02:49:02.743720] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:12.198 [2024-05-13 02:49:02.743747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.198 [2024-05-13 02:49:02.743779] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:12.198 [2024-05-13 02:49:02.743794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.198 #45 NEW cov: 12122 ft: 15471 corp: 35/1890b lim: 90 exec/s: 45 rss: 72Mb L: 51/86 MS: 1 InsertRepeatedBytes- 00:08:12.198 [2024-05-13 02:49:02.783448] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:12.198 [2024-05-13 02:49:02.783475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.198 #46 NEW cov: 12122 ft: 15479 corp: 36/1916b lim: 90 exec/s: 46 rss: 72Mb L: 26/86 MS: 1 ShuffleBytes- 00:08:12.198 [2024-05-13 02:49:02.833740] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:12.198 [2024-05-13 02:49:02.833766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.198 [2024-05-13 02:49:02.833801] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:12.198 [2024-05-13 02:49:02.833816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.198 #47 NEW cov: 12122 ft: 15522 corp: 37/1965b lim: 90 exec/s: 47 rss: 73Mb L: 49/86 MS: 1 InsertRepeatedBytes- 00:08:12.198 [2024-05-13 02:49:02.873853] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:12.198 [2024-05-13 02:49:02.873880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.198 [2024-05-13 02:49:02.873932] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:12.198 [2024-05-13 02:49:02.873948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.198 #48 NEW cov: 12122 ft: 15528 corp: 38/2003b lim: 90 exec/s: 48 rss: 73Mb L: 38/86 MS: 1 ShuffleBytes- 00:08:12.198 [2024-05-13 02:49:02.913978] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:12.198 [2024-05-13 02:49:02.914005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.198 [2024-05-13 02:49:02.914071] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:12.198 [2024-05-13 02:49:02.914087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.198 #49 NEW cov: 12122 ft: 15552 corp: 39/2047b lim: 90 exec/s: 49 rss: 73Mb L: 44/86 MS: 1 ChangeBit- 00:08:12.198 [2024-05-13 02:49:02.954007] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:12.198 [2024-05-13 02:49:02.954035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.198 [2024-05-13 02:49:02.954109] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:12.198 [2024-05-13 02:49:02.954125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.198 #50 NEW cov: 12122 ft: 15589 corp: 40/2098b lim: 90 exec/s: 50 rss: 73Mb L: 51/86 MS: 1 ChangeBinInt- 00:08:12.198 [2024-05-13 02:49:02.994124] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:12.198 [2024-05-13 02:49:02.994150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.198 [2024-05-13 02:49:02.994200] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:12.198 [2024-05-13 02:49:02.994215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.456 #51 NEW cov: 12122 ft: 15602 corp: 41/2149b lim: 90 exec/s: 25 rss: 73Mb L: 51/86 MS: 1 ShuffleBytes- 00:08:12.456 #51 DONE cov: 12122 ft: 15602 corp: 41/2149b lim: 90 exec/s: 25 rss: 73Mb 00:08:12.457 ###### Recommended dictionary. ###### 00:08:12.457 "\245\253\336o\2540\204\000" # Uses: 0 00:08:12.457 "\036\000" # Uses: 0 00:08:12.457 ###### End of recommended dictionary. 
###### 00:08:12.457 Done 51 runs in 2 second(s) 00:08:12.457 [2024-05-13 02:49:03.022421] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:12.457 02:49:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_20.conf /var/tmp/suppress_nvmf_fuzz 00:08:12.457 02:49:03 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:12.457 02:49:03 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:12.457 02:49:03 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 21 1 0x1 00:08:12.457 02:49:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=21 00:08:12.457 02:49:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:12.457 02:49:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:12.457 02:49:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:08:12.457 02:49:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_21.conf 00:08:12.457 02:49:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:12.457 02:49:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:12.457 02:49:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 21 00:08:12.457 02:49:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4421 00:08:12.457 02:49:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:08:12.457 02:49:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' 00:08:12.457 02:49:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4421"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:12.457 02:49:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:12.457 02:49:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:12.457 02:49:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' -c /tmp/fuzz_json_21.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 -Z 21 00:08:12.457 [2024-05-13 02:49:03.183675] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:08:12.457 [2024-05-13 02:49:03.183748] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3506955 ] 00:08:12.457 EAL: No free 2048 kB hugepages reported on node 1 00:08:12.715 [2024-05-13 02:49:03.395111] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:12.715 [2024-05-13 02:49:03.433040] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.715 [2024-05-13 02:49:03.461511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.715 [2024-05-13 02:49:03.513749] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:12.973 [2024-05-13 02:49:03.529706] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:12.973 [2024-05-13 02:49:03.530119] tcp.c: 966:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4421 *** 00:08:12.973 INFO: Running with entropic power schedule (0xFF, 100). 00:08:12.973 INFO: Seed: 3634570118 00:08:12.973 INFO: Loaded 1 modules (350429 inline 8-bit counters): 350429 [0x279074c, 0x27e6029), 00:08:12.973 INFO: Loaded 1 PC tables (350429 PCs): 350429 [0x27e6030,0x2d3ee00), 00:08:12.973 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:08:12.973 INFO: A corpus is not provided, starting from an empty corpus 00:08:12.973 #2 INITED exec/s: 0 rss: 63Mb 00:08:12.973 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:12.973 This may also happen if the target rejected all inputs we tried so far 00:08:12.973 [2024-05-13 02:49:03.574686] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:12.974 [2024-05-13 02:49:03.574721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.232 NEW_FUNC[1/685]: 0x4c9130 in fuzz_nvm_reservation_release_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:623 00:08:13.232 NEW_FUNC[2/685]: 0x4e0360 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:13.232 #12 NEW cov: 11820 ft: 11854 corp: 2/16b lim: 50 exec/s: 0 rss: 70Mb L: 15/15 MS: 5 ChangeByte-ChangeBit-CrossOver-ShuffleBytes-InsertRepeatedBytes- 00:08:13.232 [2024-05-13 02:49:03.905467] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:13.232 [2024-05-13 02:49:03.905506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.232 NEW_FUNC[1/2]: 0x1d21a80 in spdk_thread_poll /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1161 00:08:13.232 NEW_FUNC[2/2]: 0x1d22260 in thread_poll /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1061 00:08:13.232 #18 NEW cov: 11983 ft: 12341 corp: 3/35b lim: 50 exec/s: 0 rss: 70Mb L: 19/19 MS: 1 CMP- DE: "\377\377\377\007"- 00:08:13.232 [2024-05-13 02:49:03.975530] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:13.232 [2024-05-13 02:49:03.975561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.232 #19 NEW cov: 11989 ft: 12728 corp: 4/50b lim: 50 exec/s: 0 rss: 70Mb L: 15/19 MS: 1 ShuffleBytes- 00:08:13.232 [2024-05-13 02:49:04.025652] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:13.232 [2024-05-13 02:49:04.025681] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.489 #20 NEW cov: 12074 ft: 13009 corp: 5/69b lim: 50 exec/s: 0 rss: 70Mb L: 19/19 MS: 1 ChangeBit- 00:08:13.489 [2024-05-13 02:49:04.095915] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:13.489 [2024-05-13 02:49:04.095945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.489 [2024-05-13 02:49:04.095995] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:13.489 [2024-05-13 02:49:04.096018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.489 #21 NEW cov: 12074 ft: 13845 corp: 6/89b lim: 50 exec/s: 0 rss: 70Mb L: 20/20 MS: 1 CrossOver- 00:08:13.489 [2024-05-13 02:49:04.166007] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:13.489 [2024-05-13 02:49:04.166038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.489 #22 NEW cov: 12074 ft: 13909 corp: 7/104b lim: 50 exec/s: 0 rss: 70Mb L: 15/20 MS: 1 ChangeBinInt- 00:08:13.489 [2024-05-13 02:49:04.216161] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:13.489 [2024-05-13 02:49:04.216192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.489 #23 NEW cov: 12074 ft: 14016 corp: 8/119b lim: 50 exec/s: 0 rss: 70Mb L: 15/20 MS: 1 PersAutoDict- DE: "\377\377\377\007"- 00:08:13.489 [2024-05-13 02:49:04.266266] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:13.489 [2024-05-13 02:49:04.266296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.747 #24 NEW cov: 12074 ft: 14069 corp: 9/134b lim: 50 exec/s: 0 rss: 70Mb L: 15/20 MS: 1 ChangeBit- 00:08:13.747 [2024-05-13 02:49:04.336439] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:13.748 [2024-05-13 02:49:04.336468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.748 #25 NEW cov: 12074 ft: 14184 corp: 10/152b lim: 50 exec/s: 0 rss: 70Mb L: 18/20 MS: 1 EraseBytes- 00:08:13.748 [2024-05-13 02:49:04.387513] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:13.748 [2024-05-13 02:49:04.387553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.748 [2024-05-13 02:49:04.387603] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:13.748 [2024-05-13 02:49:04.387619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.748 [2024-05-13 02:49:04.387672] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:13.748 [2024-05-13 
02:49:04.387688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.748 #26 NEW cov: 12074 ft: 14546 corp: 11/182b lim: 50 exec/s: 0 rss: 70Mb L: 30/30 MS: 1 CopyPart- 00:08:13.748 [2024-05-13 02:49:04.437315] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:13.748 [2024-05-13 02:49:04.437342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.748 #30 NEW cov: 12074 ft: 14678 corp: 12/197b lim: 50 exec/s: 0 rss: 70Mb L: 15/30 MS: 4 CrossOver-EraseBytes-InsertByte-InsertRepeatedBytes- 00:08:13.748 [2024-05-13 02:49:04.477419] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:13.748 [2024-05-13 02:49:04.477447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.748 NEW_FUNC[1/1]: 0x1a07120 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:13.748 #31 NEW cov: 12091 ft: 14718 corp: 13/212b lim: 50 exec/s: 0 rss: 70Mb L: 15/30 MS: 1 ChangeByte- 00:08:13.748 [2024-05-13 02:49:04.517670] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:13.748 [2024-05-13 02:49:04.517698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.748 [2024-05-13 02:49:04.517756] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:13.748 [2024-05-13 02:49:04.517772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.748 #32 NEW cov: 12091 ft: 14750 corp: 14/233b lim: 50 exec/s: 0 rss: 70Mb L: 21/30 MS: 1 CrossOver- 00:08:14.006 [2024-05-13 02:49:04.557942] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:14.006 [2024-05-13 02:49:04.557970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.006 [2024-05-13 02:49:04.558005] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:14.006 [2024-05-13 02:49:04.558021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.006 [2024-05-13 02:49:04.558077] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:14.006 [2024-05-13 02:49:04.558093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:14.006 #33 NEW cov: 12091 ft: 14800 corp: 15/271b lim: 50 exec/s: 33 rss: 70Mb L: 38/38 MS: 1 CMP- DE: "\000\2040\2631\376%\200"- 00:08:14.006 [2024-05-13 02:49:04.607760] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:14.006 [2024-05-13 02:49:04.607789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.006 #34 NEW cov: 12091 ft: 14836 corp: 16/290b lim: 50 exec/s: 34 rss: 70Mb L: 
19/38 MS: 1 ShuffleBytes- 00:08:14.006 [2024-05-13 02:49:04.647879] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:14.006 [2024-05-13 02:49:04.647906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.006 #35 NEW cov: 12091 ft: 14891 corp: 17/303b lim: 50 exec/s: 35 rss: 70Mb L: 13/38 MS: 1 EraseBytes- 00:08:14.006 [2024-05-13 02:49:04.688136] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:14.006 [2024-05-13 02:49:04.688160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.006 [2024-05-13 02:49:04.688179] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:14.006 [2024-05-13 02:49:04.688191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.006 #36 NEW cov: 12091 ft: 14918 corp: 18/329b lim: 50 exec/s: 36 rss: 70Mb L: 26/38 MS: 1 InsertRepeatedBytes- 00:08:14.006 [2024-05-13 02:49:04.738576] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:14.006 [2024-05-13 02:49:04.738604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.006 [2024-05-13 02:49:04.738649] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:14.006 [2024-05-13 02:49:04.738665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.006 [2024-05-13 02:49:04.738720] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:14.006 [2024-05-13 02:49:04.738735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:14.006 [2024-05-13 02:49:04.738789] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:14.006 [2024-05-13 02:49:04.738804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:14.006 #37 NEW cov: 12091 ft: 15264 corp: 19/369b lim: 50 exec/s: 37 rss: 70Mb L: 40/40 MS: 1 CopyPart- 00:08:14.006 [2024-05-13 02:49:04.788324] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:14.006 [2024-05-13 02:49:04.788351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.265 #38 NEW cov: 12091 ft: 15289 corp: 20/388b lim: 50 exec/s: 38 rss: 70Mb L: 19/40 MS: 1 CopyPart- 00:08:14.265 [2024-05-13 02:49:04.828362] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:14.265 [2024-05-13 02:49:04.828396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.265 #40 NEW cov: 12091 ft: 15313 corp: 21/405b lim: 50 exec/s: 40 rss: 70Mb L: 17/40 MS: 2 EraseBytes-PersAutoDict- DE: "\000\2040\2631\376%\200"- 00:08:14.265 [2024-05-13 
02:49:04.868622] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:14.265 [2024-05-13 02:49:04.868649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.265 [2024-05-13 02:49:04.868685] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:14.265 [2024-05-13 02:49:04.868700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.265 #41 NEW cov: 12091 ft: 15352 corp: 22/431b lim: 50 exec/s: 41 rss: 71Mb L: 26/40 MS: 1 CrossOver- 00:08:14.265 [2024-05-13 02:49:04.918634] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:14.265 [2024-05-13 02:49:04.918661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.265 #42 NEW cov: 12091 ft: 15394 corp: 23/446b lim: 50 exec/s: 42 rss: 71Mb L: 15/40 MS: 1 ChangeBit- 00:08:14.265 [2024-05-13 02:49:04.958794] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:14.265 [2024-05-13 02:49:04.958821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.265 #43 NEW cov: 12091 ft: 15435 corp: 24/461b lim: 50 exec/s: 43 rss: 71Mb L: 15/40 MS: 1 ShuffleBytes- 00:08:14.265 [2024-05-13 02:49:04.998908] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:14.265 [2024-05-13 02:49:04.998935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.265 #44 NEW cov: 12091 ft: 15469 corp: 25/478b lim: 50 exec/s: 44 rss: 71Mb L: 17/40 MS: 1 ShuffleBytes- 00:08:14.265 [2024-05-13 02:49:05.039158] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:14.265 [2024-05-13 02:49:05.039185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.265 [2024-05-13 02:49:05.039221] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:14.265 [2024-05-13 02:49:05.039236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.265 #45 NEW cov: 12091 ft: 15484 corp: 26/501b lim: 50 exec/s: 45 rss: 71Mb L: 23/40 MS: 1 PersAutoDict- DE: "\000\2040\2631\376%\200"- 00:08:14.523 [2024-05-13 02:49:05.079268] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:14.523 [2024-05-13 02:49:05.079295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.523 [2024-05-13 02:49:05.079362] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:14.523 [2024-05-13 02:49:05.079385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.523 #46 NEW cov: 12091 ft: 15490 corp: 27/529b lim: 50 exec/s: 46 rss: 71Mb L: 28/40 
MS: 1 CrossOver- 00:08:14.523 [2024-05-13 02:49:05.119554] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:14.523 [2024-05-13 02:49:05.119580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.523 [2024-05-13 02:49:05.119628] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:14.523 [2024-05-13 02:49:05.119644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.523 [2024-05-13 02:49:05.119701] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:14.524 [2024-05-13 02:49:05.119716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:14.524 #47 NEW cov: 12091 ft: 15573 corp: 28/559b lim: 50 exec/s: 47 rss: 71Mb L: 30/40 MS: 1 ChangeByte- 00:08:14.524 [2024-05-13 02:49:05.159505] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:14.524 [2024-05-13 02:49:05.159532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.524 [2024-05-13 02:49:05.159582] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:14.524 [2024-05-13 02:49:05.159598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.524 #48 NEW cov: 12091 ft: 15581 corp: 29/579b lim: 50 exec/s: 48 rss: 71Mb L: 20/40 MS: 1 EraseBytes- 00:08:14.524 [2024-05-13 02:49:05.199477] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:14.524 [2024-05-13 02:49:05.199503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.524 #49 NEW cov: 12091 ft: 15588 corp: 30/594b lim: 50 exec/s: 49 rss: 71Mb L: 15/40 MS: 1 PersAutoDict- DE: "\000\2040\2631\376%\200"- 00:08:14.524 [2024-05-13 02:49:05.239612] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:14.524 [2024-05-13 02:49:05.239639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.524 #50 NEW cov: 12091 ft: 15597 corp: 31/609b lim: 50 exec/s: 50 rss: 71Mb L: 15/40 MS: 1 ChangeByte- 00:08:14.524 [2024-05-13 02:49:05.279835] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:14.524 [2024-05-13 02:49:05.279862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.524 [2024-05-13 02:49:05.279913] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:14.524 [2024-05-13 02:49:05.279929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.524 #51 NEW cov: 12091 ft: 15607 corp: 32/632b lim: 50 exec/s: 51 rss: 71Mb L: 23/40 MS: 1 PersAutoDict- DE: "\000\2040\2631\376%\200"- 00:08:14.524 
[2024-05-13 02:49:05.319851] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:14.524 [2024-05-13 02:49:05.319877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.781 #52 NEW cov: 12091 ft: 15628 corp: 33/651b lim: 50 exec/s: 52 rss: 71Mb L: 19/40 MS: 1 ChangeBinInt- 00:08:14.782 [2024-05-13 02:49:05.360257] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:14.782 [2024-05-13 02:49:05.360283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.782 [2024-05-13 02:49:05.360344] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:14.782 [2024-05-13 02:49:05.360360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.782 [2024-05-13 02:49:05.360421] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:14.782 [2024-05-13 02:49:05.360439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:14.782 #53 NEW cov: 12091 ft: 15641 corp: 34/685b lim: 50 exec/s: 53 rss: 71Mb L: 34/40 MS: 1 CMP- DE: "\000\000\000\014"- 00:08:14.782 [2024-05-13 02:49:05.410109] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:14.782 [2024-05-13 02:49:05.410135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.782 #59 NEW cov: 12091 ft: 15643 corp: 35/702b lim: 50 exec/s: 59 rss: 71Mb L: 17/40 MS: 1 ChangeBinInt- 00:08:14.782 [2024-05-13 02:49:05.450331] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:14.782 [2024-05-13 02:49:05.450357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.782 [2024-05-13 02:49:05.450414] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:14.782 [2024-05-13 02:49:05.450431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.782 #60 NEW cov: 12098 ft: 15664 corp: 36/722b lim: 50 exec/s: 60 rss: 71Mb L: 20/40 MS: 1 ChangeByte- 00:08:14.782 [2024-05-13 02:49:05.490839] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:14.782 [2024-05-13 02:49:05.490866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.782 [2024-05-13 02:49:05.490932] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:14.782 [2024-05-13 02:49:05.490949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.782 [2024-05-13 02:49:05.491002] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:14.782 [2024-05-13 02:49:05.491019] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:14.782 [2024-05-13 02:49:05.491072] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:14.782 [2024-05-13 02:49:05.491088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:14.782 #61 NEW cov: 12098 ft: 15673 corp: 37/762b lim: 50 exec/s: 61 rss: 72Mb L: 40/40 MS: 1 ShuffleBytes- 00:08:14.782 [2024-05-13 02:49:05.540448] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:14.782 [2024-05-13 02:49:05.540475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.782 #62 NEW cov: 12098 ft: 15682 corp: 38/778b lim: 50 exec/s: 62 rss: 72Mb L: 16/40 MS: 1 InsertByte- 00:08:14.782 [2024-05-13 02:49:05.580587] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:14.782 [2024-05-13 02:49:05.580615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.041 #63 NEW cov: 12098 ft: 15744 corp: 39/793b lim: 50 exec/s: 31 rss: 72Mb L: 15/40 MS: 1 ChangeByte- 00:08:15.041 #63 DONE cov: 12098 ft: 15744 corp: 39/793b lim: 50 exec/s: 31 rss: 72Mb 00:08:15.041 ###### Recommended dictionary. ###### 00:08:15.041 "\377\377\377\007" # Uses: 1 00:08:15.041 "\000\2040\2631\376%\200" # Uses: 4 00:08:15.041 "\000\000\000\014" # Uses: 0 00:08:15.041 ###### End of recommended dictionary. ###### 00:08:15.041 Done 63 runs in 2 second(s) 00:08:15.041 [2024-05-13 02:49:05.602304] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:15.041 02:49:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_21.conf /var/tmp/suppress_nvmf_fuzz 00:08:15.041 02:49:05 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:15.041 02:49:05 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:15.041 02:49:05 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 22 1 0x1 00:08:15.041 02:49:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=22 00:08:15.041 02:49:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:15.041 02:49:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:15.041 02:49:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:08:15.041 02:49:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_22.conf 00:08:15.041 02:49:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:15.041 02:49:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:15.041 02:49:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 22 00:08:15.041 02:49:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4422 00:08:15.041 02:49:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:08:15.041 02:49:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # 
trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' 00:08:15.041 02:49:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4422"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:15.041 02:49:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:15.041 02:49:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:15.041 02:49:05 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' -c /tmp/fuzz_json_22.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 -Z 22 00:08:15.041 [2024-05-13 02:49:05.764801] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:08:15.041 [2024-05-13 02:49:05.764872] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3507257 ] 00:08:15.041 EAL: No free 2048 kB hugepages reported on node 1 00:08:15.300 [2024-05-13 02:49:05.908430] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:15.300 [2024-05-13 02:49:05.947928] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.300 [2024-05-13 02:49:05.969659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.300 [2024-05-13 02:49:06.022253] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:15.300 [2024-05-13 02:49:06.038207] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:15.300 [2024-05-13 02:49:06.038630] tcp.c: 966:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4422 *** 00:08:15.300 INFO: Running with entropic power schedule (0xFF, 100). 00:08:15.300 INFO: Seed: 1845627644 00:08:15.300 INFO: Loaded 1 modules (350429 inline 8-bit counters): 350429 [0x279074c, 0x27e6029), 00:08:15.300 INFO: Loaded 1 PC tables (350429 PCs): 350429 [0x27e6030,0x2d3ee00), 00:08:15.300 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:08:15.300 INFO: A corpus is not provided, starting from an empty corpus 00:08:15.300 #2 INITED exec/s: 0 rss: 63Mb 00:08:15.300 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:15.300 This may also happen if the target rejected all inputs we tried so far 00:08:15.559 [2024-05-13 02:49:06.107429] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:15.559 [2024-05-13 02:49:06.107467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.817 NEW_FUNC[1/687]: 0x4cb3f0 in fuzz_nvm_reservation_register_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:644 00:08:15.817 NEW_FUNC[2/687]: 0x4e0360 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:15.817 #5 NEW cov: 11878 ft: 11879 corp: 2/24b lim: 85 exec/s: 0 rss: 70Mb L: 23/23 MS: 3 ShuffleBytes-CrossOver-InsertRepeatedBytes- 00:08:15.817 [2024-05-13 02:49:06.438487] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:15.817 [2024-05-13 02:49:06.438547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.817 #6 NEW cov: 12009 ft: 12506 corp: 3/48b lim: 85 exec/s: 0 rss: 70Mb L: 24/24 MS: 1 CrossOver- 00:08:15.817 [2024-05-13 02:49:06.488455] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:15.817 [2024-05-13 02:49:06.488489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.817 #7 NEW cov: 12015 ft: 12673 corp: 4/73b lim: 85 exec/s: 0 rss: 70Mb L: 25/25 MS: 1 CrossOver- 00:08:15.817 [2024-05-13 02:49:06.539073] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:15.817 [2024-05-13 02:49:06.539108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.817 [2024-05-13 02:49:06.539242] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:15.817 [2024-05-13 02:49:06.539270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:15.817 #8 NEW cov: 12100 ft: 13698 corp: 5/114b lim: 85 exec/s: 0 rss: 70Mb L: 41/41 MS: 1 InsertRepeatedBytes- 00:08:15.817 [2024-05-13 02:49:06.588951] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:15.817 [2024-05-13 02:49:06.588978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:15.817 #9 NEW cov: 12100 ft: 13821 corp: 6/138b lim: 85 exec/s: 0 rss: 70Mb L: 24/41 MS: 1 CrossOver- 00:08:16.075 [2024-05-13 02:49:06.649049] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:16.075 [2024-05-13 02:49:06.649074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.075 #10 NEW cov: 12100 ft: 13966 corp: 7/162b lim: 85 exec/s: 0 rss: 70Mb L: 24/41 MS: 1 EraseBytes- 00:08:16.075 [2024-05-13 02:49:06.709284] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:16.075 [2024-05-13 02:49:06.709310] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.075 #11 NEW cov: 12100 ft: 14153 corp: 8/186b lim: 85 exec/s: 0 rss: 70Mb L: 24/41 MS: 1 ChangeByte- 00:08:16.075 [2024-05-13 02:49:06.759428] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:16.075 [2024-05-13 02:49:06.759460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.075 #12 NEW cov: 12100 ft: 14208 corp: 9/209b lim: 85 exec/s: 0 rss: 70Mb L: 23/41 MS: 1 EraseBytes- 00:08:16.075 [2024-05-13 02:49:06.809509] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:16.075 [2024-05-13 02:49:06.809540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.075 #13 NEW cov: 12100 ft: 14234 corp: 10/229b lim: 85 exec/s: 0 rss: 70Mb L: 20/41 MS: 1 InsertRepeatedBytes- 00:08:16.076 [2024-05-13 02:49:06.859809] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:16.076 [2024-05-13 02:49:06.859837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.334 #14 NEW cov: 12100 ft: 14275 corp: 11/252b lim: 85 exec/s: 0 rss: 70Mb L: 23/41 MS: 1 EraseBytes- 00:08:16.334 [2024-05-13 02:49:06.919969] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:16.334 [2024-05-13 02:49:06.919999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.334 #15 NEW cov: 12100 ft: 14298 corp: 12/275b lim: 85 exec/s: 0 rss: 70Mb L: 23/41 MS: 1 ChangeBinInt- 00:08:16.334 [2024-05-13 02:49:06.970053] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:16.334 [2024-05-13 02:49:06.970083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.334 NEW_FUNC[1/1]: 0x1a07120 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:16.334 #16 NEW cov: 12123 ft: 14352 corp: 13/300b lim: 85 exec/s: 0 rss: 70Mb L: 25/41 MS: 1 CrossOver- 00:08:16.334 [2024-05-13 02:49:07.030333] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:16.334 [2024-05-13 02:49:07.030362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.334 #17 NEW cov: 12123 ft: 14369 corp: 14/323b lim: 85 exec/s: 0 rss: 70Mb L: 23/41 MS: 1 CopyPart- 00:08:16.334 [2024-05-13 02:49:07.090359] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:16.334 [2024-05-13 02:49:07.090389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.334 #18 NEW cov: 12123 ft: 14384 corp: 15/347b lim: 85 exec/s: 18 rss: 71Mb L: 24/41 MS: 1 ChangeByte- 00:08:16.592 [2024-05-13 02:49:07.140879] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 
nsid:0 00:08:16.592 [2024-05-13 02:49:07.140914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.592 [2024-05-13 02:49:07.141040] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:16.592 [2024-05-13 02:49:07.141065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:16.592 #19 NEW cov: 12123 ft: 14387 corp: 16/395b lim: 85 exec/s: 19 rss: 71Mb L: 48/48 MS: 1 InsertRepeatedBytes- 00:08:16.592 [2024-05-13 02:49:07.190738] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:16.592 [2024-05-13 02:49:07.190763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.592 #20 NEW cov: 12123 ft: 14401 corp: 17/418b lim: 85 exec/s: 20 rss: 71Mb L: 23/48 MS: 1 ChangeByte- 00:08:16.592 [2024-05-13 02:49:07.250851] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:16.592 [2024-05-13 02:49:07.250877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.592 #21 NEW cov: 12123 ft: 14445 corp: 18/438b lim: 85 exec/s: 21 rss: 71Mb L: 20/48 MS: 1 EraseBytes- 00:08:16.592 [2024-05-13 02:49:07.311299] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:16.592 [2024-05-13 02:49:07.311338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.592 [2024-05-13 02:49:07.311474] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:16.592 [2024-05-13 02:49:07.311498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:16.593 #22 NEW cov: 12123 ft: 14491 corp: 19/474b lim: 85 exec/s: 22 rss: 71Mb L: 36/48 MS: 1 InsertRepeatedBytes- 00:08:16.593 [2024-05-13 02:49:07.371595] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:16.593 [2024-05-13 02:49:07.371629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.593 [2024-05-13 02:49:07.371766] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:16.593 [2024-05-13 02:49:07.371791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:16.851 #23 NEW cov: 12123 ft: 14514 corp: 20/519b lim: 85 exec/s: 23 rss: 71Mb L: 45/48 MS: 1 CrossOver- 00:08:16.851 [2024-05-13 02:49:07.431454] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:16.851 [2024-05-13 02:49:07.431480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.851 #24 NEW cov: 12123 ft: 14595 corp: 21/541b lim: 85 exec/s: 24 rss: 71Mb L: 22/48 MS: 1 EraseBytes- 00:08:16.851 [2024-05-13 02:49:07.491627] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:16.851 [2024-05-13 02:49:07.491653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.851 #25 NEW cov: 12123 ft: 14608 corp: 22/566b lim: 85 exec/s: 25 rss: 71Mb L: 25/48 MS: 1 ChangeBinInt- 00:08:16.851 [2024-05-13 02:49:07.542086] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:16.851 [2024-05-13 02:49:07.542113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.851 [2024-05-13 02:49:07.542256] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:16.851 [2024-05-13 02:49:07.542283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:16.851 #26 NEW cov: 12123 ft: 14670 corp: 23/605b lim: 85 exec/s: 26 rss: 71Mb L: 39/48 MS: 1 InsertRepeatedBytes- 00:08:16.851 [2024-05-13 02:49:07.592214] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:16.851 [2024-05-13 02:49:07.592246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:16.851 [2024-05-13 02:49:07.592385] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:16.851 [2024-05-13 02:49:07.592416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:16.851 #27 NEW cov: 12123 ft: 14678 corp: 24/644b lim: 85 exec/s: 27 rss: 71Mb L: 39/48 MS: 1 ChangeBit- 00:08:16.851 [2024-05-13 02:49:07.652112] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:16.851 [2024-05-13 02:49:07.652149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.109 #28 NEW cov: 12123 ft: 14698 corp: 25/667b lim: 85 exec/s: 28 rss: 71Mb L: 23/48 MS: 1 ChangeBit- 00:08:17.109 [2024-05-13 02:49:07.702607] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:17.109 [2024-05-13 02:49:07.702643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.109 [2024-05-13 02:49:07.702786] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:17.109 [2024-05-13 02:49:07.702816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.109 #29 NEW cov: 12123 ft: 14768 corp: 26/706b lim: 85 exec/s: 29 rss: 71Mb L: 39/48 MS: 1 ChangeBinInt- 00:08:17.109 [2024-05-13 02:49:07.752785] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:17.109 [2024-05-13 02:49:07.752818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.109 [2024-05-13 02:49:07.752966] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:17.109 [2024-05-13 
02:49:07.752994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.109 #30 NEW cov: 12123 ft: 14818 corp: 27/746b lim: 85 exec/s: 30 rss: 71Mb L: 40/48 MS: 1 InsertByte- 00:08:17.109 [2024-05-13 02:49:07.802711] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:17.109 [2024-05-13 02:49:07.802738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.109 #31 NEW cov: 12123 ft: 14846 corp: 28/769b lim: 85 exec/s: 31 rss: 71Mb L: 23/48 MS: 1 ChangeByte- 00:08:17.110 [2024-05-13 02:49:07.853159] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:17.110 [2024-05-13 02:49:07.853193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.110 [2024-05-13 02:49:07.853347] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:17.110 [2024-05-13 02:49:07.853373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.110 #32 NEW cov: 12123 ft: 14854 corp: 29/817b lim: 85 exec/s: 32 rss: 72Mb L: 48/48 MS: 1 CMP- DE: ",\257\246\374\265\006\000\000"- 00:08:17.110 [2024-05-13 02:49:07.913000] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:17.110 [2024-05-13 02:49:07.913027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.368 #37 NEW cov: 12123 ft: 14860 corp: 30/838b lim: 85 exec/s: 37 rss: 72Mb L: 21/48 MS: 5 InsertByte-EraseBytes-ShuffleBytes-InsertRepeatedBytes-PersAutoDict- DE: ",\257\246\374\265\006\000\000"- 00:08:17.368 [2024-05-13 02:49:07.963579] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:17.368 [2024-05-13 02:49:07.963614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.368 [2024-05-13 02:49:07.963745] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:17.368 [2024-05-13 02:49:07.963767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:17.368 #38 NEW cov: 12123 ft: 14868 corp: 31/879b lim: 85 exec/s: 38 rss: 72Mb L: 41/48 MS: 1 PersAutoDict- DE: ",\257\246\374\265\006\000\000"- 00:08:17.368 [2024-05-13 02:49:08.013394] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:17.368 [2024-05-13 02:49:08.013422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:17.368 #39 NEW cov: 12123 ft: 14883 corp: 32/902b lim: 85 exec/s: 39 rss: 72Mb L: 23/48 MS: 1 ShuffleBytes- 00:08:17.368 [2024-05-13 02:49:08.073461] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:17.368 [2024-05-13 02:49:08.073493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 
m:0 dnr:1 00:08:17.368 #40 NEW cov: 12123 ft: 14901 corp: 33/923b lim: 85 exec/s: 20 rss: 72Mb L: 21/48 MS: 1 ChangeBinInt- 00:08:17.368 #40 DONE cov: 12123 ft: 14901 corp: 33/923b lim: 85 exec/s: 20 rss: 72Mb 00:08:17.368 ###### Recommended dictionary. ###### 00:08:17.368 ",\257\246\374\265\006\000\000" # Uses: 2 00:08:17.368 ###### End of recommended dictionary. ###### 00:08:17.368 Done 40 runs in 2 second(s) 00:08:17.368 [2024-05-13 02:49:08.100830] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:17.626 02:49:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_22.conf /var/tmp/suppress_nvmf_fuzz 00:08:17.626 02:49:08 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:17.626 02:49:08 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:17.626 02:49:08 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 23 1 0x1 00:08:17.626 02:49:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=23 00:08:17.626 02:49:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:17.626 02:49:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:17.626 02:49:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:08:17.627 02:49:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_23.conf 00:08:17.627 02:49:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:17.627 02:49:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:17.627 02:49:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 23 00:08:17.627 02:49:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4423 00:08:17.627 02:49:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:08:17.627 02:49:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' 00:08:17.627 02:49:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4423"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:17.627 02:49:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:17.627 02:49:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:17.627 02:49:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' -c /tmp/fuzz_json_23.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 -Z 23 00:08:17.627 [2024-05-13 02:49:08.262434] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 
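The shell trace above shows how nvmf/run.sh assembles one fuzzer iteration: it zero-pads the fuzzer index onto "44" to get a per-index TCP port (printf %02d 23 gives port 4423), creates the matching corpus directory, rewrites trsvcid 4420 to that port in a copy of fuzz_json.conf, registers two LeakSanitizer suppressions, and then launches llvm_nvme_fuzz against the resulting transport ID. Below is a minimal sketch of an equivalent manual invocation; SPDK_DIR and OUT_DIR are placeholder paths, and the flags are taken from the trace itself (-m core mask, -s SPDK memory in MB, -t the timen budget, -D corpus directory, -Z fuzzer index).

# Sketch only: re-create the run.sh steps for fuzzer index 23 by hand.
SPDK_DIR=/path/to/spdk      # placeholder for the checked-out SPDK tree
OUT_DIR=/path/to/output     # placeholder for the -P output directory
IDX=23
PORT="44$(printf %02d "$IDX")"   # 4423, as in the trace above
TRID="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$PORT"

mkdir -p "$SPDK_DIR/../corpus/llvm_nvmf_$IDX"
# Patch the shared TCP listener config to the per-index port, as run.sh does with sed.
sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$PORT\"/" \
    "$SPDK_DIR/test/fuzz/llvm/nvmf/fuzz_json.conf" > "/tmp/fuzz_json_$IDX.conf"

# LeakSanitizer suppressions written before the target starts.
printf 'leak:spdk_nvmf_qpair_disconnect\nleak:nvmf_ctrlr_create\n' > /var/tmp/suppress_nvmf_fuzz
export LSAN_OPTIONS="report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0"

"$SPDK_DIR/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" \
    -m 0x1 -s 512 -P "$OUT_DIR/llvm/" -F "$TRID" \
    -c "/tmp/fuzz_json_$IDX.conf" -t 1 \
    -D "$SPDK_DIR/../corpus/llvm_nvmf_$IDX" -Z "$IDX"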
00:08:17.627 [2024-05-13 02:49:08.262497] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3507781 ] 00:08:17.627 EAL: No free 2048 kB hugepages reported on node 1 00:08:17.627 [2024-05-13 02:49:08.406366] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:17.885 [2024-05-13 02:49:08.445462] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.885 [2024-05-13 02:49:08.467329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.885 [2024-05-13 02:49:08.519559] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:17.885 [2024-05-13 02:49:08.535501] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:17.885 [2024-05-13 02:49:08.535915] tcp.c: 966:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4423 *** 00:08:17.885 INFO: Running with entropic power schedule (0xFF, 100). 00:08:17.885 INFO: Seed: 48655882 00:08:17.885 INFO: Loaded 1 modules (350429 inline 8-bit counters): 350429 [0x279074c, 0x27e6029), 00:08:17.885 INFO: Loaded 1 PC tables (350429 PCs): 350429 [0x27e6030,0x2d3ee00), 00:08:17.885 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:08:17.885 INFO: A corpus is not provided, starting from an empty corpus 00:08:17.885 #2 INITED exec/s: 0 rss: 63Mb 00:08:17.885 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:17.885 This may also happen if the target rejected all inputs we tried so far 00:08:17.885 [2024-05-13 02:49:08.584715] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:17.885 [2024-05-13 02:49:08.584745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.144 NEW_FUNC[1/686]: 0x4ce620 in fuzz_nvm_reservation_report_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:671 00:08:18.144 NEW_FUNC[2/686]: 0x4e0360 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:18.144 #15 NEW cov: 11812 ft: 11813 corp: 2/8b lim: 25 exec/s: 0 rss: 70Mb L: 7/7 MS: 3 ShuffleBytes-ChangeBit-InsertRepeatedBytes- 00:08:18.144 [2024-05-13 02:49:08.895421] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:18.144 [2024-05-13 02:49:08.895456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.144 #16 NEW cov: 11942 ft: 12462 corp: 3/15b lim: 25 exec/s: 0 rss: 70Mb L: 7/7 MS: 1 ShuffleBytes- 00:08:18.144 [2024-05-13 02:49:08.945505] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:18.144 [2024-05-13 02:49:08.945535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.403 #17 NEW cov: 11948 ft: 12685 corp: 4/22b lim: 25 exec/s: 0 rss: 70Mb L: 7/7 MS: 1 ChangeByte- 00:08:18.403 [2024-05-13 02:49:08.985559] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:18.403 [2024-05-13 02:49:08.985587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.403 #18 NEW cov: 12033 ft: 12895 corp: 5/29b lim: 25 exec/s: 0 rss: 70Mb L: 7/7 MS: 1 ChangeBit- 00:08:18.403 [2024-05-13 02:49:09.025707] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:18.403 [2024-05-13 02:49:09.025733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.403 #19 NEW cov: 12033 ft: 12994 corp: 6/36b lim: 25 exec/s: 0 rss: 70Mb L: 7/7 MS: 1 ChangeBit- 00:08:18.403 [2024-05-13 02:49:09.065870] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:18.403 [2024-05-13 02:49:09.065898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.403 [2024-05-13 02:49:09.065942] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:18.403 [2024-05-13 02:49:09.065957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.403 #20 NEW cov: 12033 ft: 13400 corp: 7/49b lim: 25 exec/s: 0 rss: 70Mb L: 13/13 MS: 1 CrossOver- 00:08:18.403 [2024-05-13 02:49:09.115905] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:18.403 [2024-05-13 02:49:09.115932] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.403 #23 NEW cov: 12033 ft: 13459 corp: 8/54b lim: 25 exec/s: 0 rss: 70Mb L: 5/13 MS: 3 EraseBytes-ChangeByte-InsertByte- 00:08:18.403 [2024-05-13 02:49:09.156058] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:18.403 [2024-05-13 02:49:09.156088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.403 #24 NEW cov: 12033 ft: 13575 corp: 9/61b lim: 25 exec/s: 0 rss: 70Mb L: 7/13 MS: 1 ChangeByte- 00:08:18.403 [2024-05-13 02:49:09.196130] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:18.403 [2024-05-13 02:49:09.196157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.662 #25 NEW cov: 12033 ft: 13643 corp: 10/68b lim: 25 exec/s: 0 rss: 70Mb L: 7/13 MS: 1 ChangeBit- 00:08:18.662 [2024-05-13 02:49:09.236285] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:18.662 [2024-05-13 02:49:09.236312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.662 #26 NEW cov: 12033 ft: 13719 corp: 11/75b lim: 25 exec/s: 0 rss: 70Mb L: 7/13 MS: 1 ChangeBinInt- 00:08:18.662 [2024-05-13 02:49:09.276630] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:18.662 [2024-05-13 02:49:09.276659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.662 [2024-05-13 02:49:09.276699] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:18.662 [2024-05-13 02:49:09.276714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.662 [2024-05-13 02:49:09.276770] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:18.662 [2024-05-13 02:49:09.276786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:18.662 #32 NEW cov: 12033 ft: 14008 corp: 12/92b lim: 25 exec/s: 0 rss: 70Mb L: 17/17 MS: 1 InsertRepeatedBytes- 00:08:18.662 [2024-05-13 02:49:09.326539] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:18.662 [2024-05-13 02:49:09.326566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.663 #33 NEW cov: 12033 ft: 14053 corp: 13/100b lim: 25 exec/s: 0 rss: 70Mb L: 8/17 MS: 1 InsertByte- 00:08:18.663 [2024-05-13 02:49:09.366633] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:18.663 [2024-05-13 02:49:09.366661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.663 #34 NEW cov: 12033 ft: 14069 corp: 14/108b lim: 25 exec/s: 0 rss: 70Mb L: 8/17 MS: 1 InsertByte- 00:08:18.663 [2024-05-13 02:49:09.406715] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:18.663 [2024-05-13 02:49:09.406742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.663 #35 NEW cov: 12033 ft: 14081 corp: 15/116b lim: 25 exec/s: 0 rss: 70Mb L: 8/17 MS: 1 InsertByte- 00:08:18.663 [2024-05-13 02:49:09.446825] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:18.663 [2024-05-13 02:49:09.446852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.922 #36 NEW cov: 12033 ft: 14106 corp: 16/123b lim: 25 exec/s: 0 rss: 70Mb L: 7/17 MS: 1 ChangeBit- 00:08:18.922 [2024-05-13 02:49:09.487144] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:18.922 [2024-05-13 02:49:09.487171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.922 [2024-05-13 02:49:09.487219] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:18.922 [2024-05-13 02:49:09.487237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.922 [2024-05-13 02:49:09.487292] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:18.922 [2024-05-13 02:49:09.487308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:18.922 NEW_FUNC[1/1]: 0x1a07120 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:18.922 #37 NEW cov: 12056 ft: 14160 corp: 17/140b lim: 25 exec/s: 0 rss: 70Mb L: 17/17 MS: 1 ChangeBit- 00:08:18.922 [2024-05-13 02:49:09.537206] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:18.922 [2024-05-13 02:49:09.537233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.922 [2024-05-13 02:49:09.537286] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:18.922 [2024-05-13 02:49:09.537303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.922 #38 NEW cov: 12056 ft: 14197 corp: 18/152b lim: 25 exec/s: 0 rss: 70Mb L: 12/17 MS: 1 CrossOver- 00:08:18.922 [2024-05-13 02:49:09.577167] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:18.922 [2024-05-13 02:49:09.577195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.922 #39 NEW cov: 12056 ft: 14208 corp: 19/159b lim: 25 exec/s: 39 rss: 70Mb L: 7/17 MS: 1 ShuffleBytes- 00:08:18.922 [2024-05-13 02:49:09.617319] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:18.922 [2024-05-13 02:49:09.617345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.922 #44 NEW cov: 12056 ft: 14232 corp: 
20/164b lim: 25 exec/s: 44 rss: 71Mb L: 5/17 MS: 5 EraseBytes-ChangeBit-ChangeByte-ChangeByte-InsertByte- 00:08:18.922 [2024-05-13 02:49:09.657472] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:18.922 [2024-05-13 02:49:09.657499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.922 #45 NEW cov: 12056 ft: 14237 corp: 21/170b lim: 25 exec/s: 45 rss: 71Mb L: 6/17 MS: 1 InsertByte- 00:08:18.922 [2024-05-13 02:49:09.697989] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:18.922 [2024-05-13 02:49:09.698017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:18.922 [2024-05-13 02:49:09.698063] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:18.922 [2024-05-13 02:49:09.698079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:18.922 [2024-05-13 02:49:09.698132] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:18.922 [2024-05-13 02:49:09.698147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:18.922 [2024-05-13 02:49:09.698202] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:18.922 [2024-05-13 02:49:09.698217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:18.922 [2024-05-13 02:49:09.698271] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:08:18.922 [2024-05-13 02:49:09.698285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:18.922 #46 NEW cov: 12056 ft: 14716 corp: 22/195b lim: 25 exec/s: 46 rss: 71Mb L: 25/25 MS: 1 InsertRepeatedBytes- 00:08:19.181 [2024-05-13 02:49:09.737642] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:19.181 [2024-05-13 02:49:09.737670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.181 #47 NEW cov: 12056 ft: 14736 corp: 23/200b lim: 25 exec/s: 47 rss: 71Mb L: 5/25 MS: 1 ShuffleBytes- 00:08:19.181 [2024-05-13 02:49:09.777884] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:19.181 [2024-05-13 02:49:09.777911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.181 [2024-05-13 02:49:09.777962] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:19.181 [2024-05-13 02:49:09.777978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.181 #48 NEW cov: 12056 ft: 14744 corp: 24/211b lim: 25 exec/s: 48 rss: 71Mb L: 11/25 MS: 1 CrossOver- 00:08:19.181 [2024-05-13 02:49:09.817851] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION 
REPORT (0e) sqid:1 cid:0 nsid:0 00:08:19.181 [2024-05-13 02:49:09.817878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.181 #49 NEW cov: 12056 ft: 14762 corp: 25/216b lim: 25 exec/s: 49 rss: 71Mb L: 5/25 MS: 1 EraseBytes- 00:08:19.181 [2024-05-13 02:49:09.848220] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:19.181 [2024-05-13 02:49:09.848247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.181 [2024-05-13 02:49:09.848282] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:19.181 [2024-05-13 02:49:09.848298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.181 [2024-05-13 02:49:09.848352] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:19.181 [2024-05-13 02:49:09.848367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.181 #50 NEW cov: 12056 ft: 14789 corp: 26/234b lim: 25 exec/s: 50 rss: 71Mb L: 18/25 MS: 1 InsertByte- 00:08:19.181 [2024-05-13 02:49:09.888080] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:19.181 [2024-05-13 02:49:09.888106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.181 [2024-05-13 02:49:09.918191] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:19.181 [2024-05-13 02:49:09.918217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.181 #52 NEW cov: 12056 ft: 14795 corp: 27/243b lim: 25 exec/s: 52 rss: 71Mb L: 9/25 MS: 2 InsertByte-InsertByte- 00:08:19.181 [2024-05-13 02:49:09.948513] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:19.181 [2024-05-13 02:49:09.948540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.181 [2024-05-13 02:49:09.948603] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:19.181 [2024-05-13 02:49:09.948619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.181 [2024-05-13 02:49:09.948674] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:19.181 [2024-05-13 02:49:09.948689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.181 #58 NEW cov: 12056 ft: 14807 corp: 28/258b lim: 25 exec/s: 58 rss: 71Mb L: 15/25 MS: 1 InsertRepeatedBytes- 00:08:19.440 [2024-05-13 02:49:09.998558] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:19.440 [2024-05-13 02:49:09.998585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
00:08:19.440 [2024-05-13 02:49:09.998649] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:19.440 [2024-05-13 02:49:09.998665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.440 #59 NEW cov: 12056 ft: 14812 corp: 29/271b lim: 25 exec/s: 59 rss: 71Mb L: 13/25 MS: 1 CopyPart- 00:08:19.440 [2024-05-13 02:49:10.038697] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:19.440 [2024-05-13 02:49:10.038726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.440 [2024-05-13 02:49:10.038776] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:19.440 [2024-05-13 02:49:10.038792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.440 #60 NEW cov: 12056 ft: 14818 corp: 30/283b lim: 25 exec/s: 60 rss: 71Mb L: 12/25 MS: 1 ChangeByte- 00:08:19.440 [2024-05-13 02:49:10.078679] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:19.440 [2024-05-13 02:49:10.078709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.440 #61 NEW cov: 12056 ft: 14826 corp: 31/289b lim: 25 exec/s: 61 rss: 71Mb L: 6/25 MS: 1 EraseBytes- 00:08:19.440 [2024-05-13 02:49:10.129316] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:19.440 [2024-05-13 02:49:10.129345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.440 [2024-05-13 02:49:10.129400] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:19.440 [2024-05-13 02:49:10.129416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.440 [2024-05-13 02:49:10.129470] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:19.440 [2024-05-13 02:49:10.129485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.440 [2024-05-13 02:49:10.129538] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:19.440 [2024-05-13 02:49:10.129553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:19.440 [2024-05-13 02:49:10.129608] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:08:19.440 [2024-05-13 02:49:10.129624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:19.440 #65 NEW cov: 12056 ft: 14832 corp: 32/314b lim: 25 exec/s: 65 rss: 71Mb L: 25/25 MS: 4 EraseBytes-ChangeBit-ChangeByte-InsertRepeatedBytes- 00:08:19.440 [2024-05-13 02:49:10.169175] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:19.440 [2024-05-13 02:49:10.169202] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.440 [2024-05-13 02:49:10.169237] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:19.440 [2024-05-13 02:49:10.169254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.440 [2024-05-13 02:49:10.169311] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:19.440 [2024-05-13 02:49:10.169327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.440 #66 NEW cov: 12056 ft: 14853 corp: 33/329b lim: 25 exec/s: 66 rss: 71Mb L: 15/25 MS: 1 ShuffleBytes- 00:08:19.440 [2024-05-13 02:49:10.219068] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:19.440 [2024-05-13 02:49:10.219095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.440 #68 NEW cov: 12056 ft: 14907 corp: 34/338b lim: 25 exec/s: 68 rss: 71Mb L: 9/25 MS: 2 CopyPart-CMP- DE: "\341D\013:\2610\204\000"- 00:08:19.699 [2024-05-13 02:49:10.259174] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:19.699 [2024-05-13 02:49:10.259202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.699 #69 NEW cov: 12056 ft: 14967 corp: 35/344b lim: 25 exec/s: 69 rss: 71Mb L: 6/25 MS: 1 ChangeBit- 00:08:19.699 [2024-05-13 02:49:10.299415] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:19.699 [2024-05-13 02:49:10.299444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.699 [2024-05-13 02:49:10.299480] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:19.699 [2024-05-13 02:49:10.299496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.699 #70 NEW cov: 12056 ft: 14970 corp: 36/355b lim: 25 exec/s: 70 rss: 72Mb L: 11/25 MS: 1 ChangeBit- 00:08:19.699 [2024-05-13 02:49:10.339427] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:19.699 [2024-05-13 02:49:10.339455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.699 #71 NEW cov: 12056 ft: 14977 corp: 37/362b lim: 25 exec/s: 71 rss: 72Mb L: 7/25 MS: 1 ChangeBit- 00:08:19.699 [2024-05-13 02:49:10.379491] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:19.699 [2024-05-13 02:49:10.379519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.699 #72 NEW cov: 12056 ft: 15046 corp: 38/369b lim: 25 exec/s: 72 rss: 72Mb L: 7/25 MS: 1 ChangeBit- 00:08:19.699 [2024-05-13 02:49:10.419636] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:19.699 
[2024-05-13 02:49:10.419665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.699 #78 NEW cov: 12056 ft: 15060 corp: 39/377b lim: 25 exec/s: 78 rss: 72Mb L: 8/25 MS: 1 InsertByte- 00:08:19.700 [2024-05-13 02:49:10.459730] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:19.700 [2024-05-13 02:49:10.459758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.700 #85 NEW cov: 12056 ft: 15070 corp: 40/382b lim: 25 exec/s: 85 rss: 72Mb L: 5/25 MS: 2 EraseBytes-InsertByte- 00:08:19.700 [2024-05-13 02:49:10.490050] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:19.700 [2024-05-13 02:49:10.490078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.700 [2024-05-13 02:49:10.490132] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:19.700 [2024-05-13 02:49:10.490148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:19.700 [2024-05-13 02:49:10.490209] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:19.700 [2024-05-13 02:49:10.490224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:19.959 #86 NEW cov: 12056 ft: 15079 corp: 41/398b lim: 25 exec/s: 86 rss: 72Mb L: 16/25 MS: 1 InsertByte- 00:08:19.959 [2024-05-13 02:49:10.539991] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:19.959 [2024-05-13 02:49:10.540017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:19.959 #87 NEW cov: 12056 ft: 15100 corp: 42/403b lim: 25 exec/s: 43 rss: 72Mb L: 5/25 MS: 1 CrossOver- 00:08:19.959 #87 DONE cov: 12056 ft: 15100 corp: 42/403b lim: 25 exec/s: 43 rss: 72Mb 00:08:19.959 ###### Recommended dictionary. ###### 00:08:19.959 "\341D\013:\2610\204\000" # Uses: 0 00:08:19.959 ###### End of recommended dictionary. 
###### 00:08:19.959 Done 87 runs in 2 second(s) 00:08:19.959 [2024-05-13 02:49:10.567641] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:19.959 02:49:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_23.conf /var/tmp/suppress_nvmf_fuzz 00:08:19.959 02:49:10 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:19.959 02:49:10 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:19.959 02:49:10 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 24 1 0x1 00:08:19.959 02:49:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=24 00:08:19.959 02:49:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:19.959 02:49:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:19.959 02:49:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:08:19.959 02:49:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_24.conf 00:08:19.959 02:49:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:19.959 02:49:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:19.959 02:49:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 24 00:08:19.959 02:49:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4424 00:08:19.959 02:49:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:08:19.959 02:49:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' 00:08:19.959 02:49:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4424"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:19.959 02:49:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:19.959 02:49:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:19.959 02:49:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' -c /tmp/fuzz_json_24.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 -Z 24 00:08:19.959 [2024-05-13 02:49:10.729770] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:08:19.959 [2024-05-13 02:49:10.729866] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3508210 ] 00:08:20.218 EAL: No free 2048 kB hugepages reported on node 1 00:08:20.218 [2024-05-13 02:49:10.872483] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
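The "Recommended dictionary" block printed at the end of the previous run lists byte strings the fuzzer found useful; the escapes appear to be C-style octal, so "\341D\013:\2610\204\000" is the byte sequence e1 44 0b 3a b1 30 84 00. A small sketch of persisting it as a standard libFuzzer dictionary file (hex \xNN escapes) follows. The nvmf_23.dict file name and the kw1 key are placeholders, and this log does not show whether the run scripts forward extra libFuzzer options such as -dict= to llvm_nvme_fuzz, so wiring the file into a later run would need to be checked separately.

# Sketch only: save the reported dictionary entry in libFuzzer's dict format.
printf '%s\n' 'kw1="\xe1\x44\x0b\x3a\xb1\x30\x84\x00"' > nvmf_23.dict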
00:08:20.219 [2024-05-13 02:49:10.910658] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.219 [2024-05-13 02:49:10.933550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.219 [2024-05-13 02:49:10.986049] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:20.219 [2024-05-13 02:49:11.002004] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:20.219 [2024-05-13 02:49:11.002417] tcp.c: 966:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4424 *** 00:08:20.219 INFO: Running with entropic power schedule (0xFF, 100). 00:08:20.219 INFO: Seed: 2515672482 00:08:20.478 INFO: Loaded 1 modules (350429 inline 8-bit counters): 350429 [0x279074c, 0x27e6029), 00:08:20.478 INFO: Loaded 1 PC tables (350429 PCs): 350429 [0x27e6030,0x2d3ee00), 00:08:20.478 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:08:20.478 INFO: A corpus is not provided, starting from an empty corpus 00:08:20.478 #2 INITED exec/s: 0 rss: 63Mb 00:08:20.478 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:20.478 This may also happen if the target rejected all inputs we tried so far 00:08:20.478 [2024-05-13 02:49:11.068404] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.478 [2024-05-13 02:49:11.068460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.478 [2024-05-13 02:49:11.068592] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.478 [2024-05-13 02:49:11.068615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.737 NEW_FUNC[1/687]: 0x4cf700 in fuzz_nvm_compare_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:685 00:08:20.737 NEW_FUNC[2/687]: 0x4e0360 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:20.737 #5 NEW cov: 11884 ft: 11876 corp: 2/58b lim: 100 exec/s: 0 rss: 70Mb L: 57/57 MS: 3 CopyPart-CopyPart-InsertRepeatedBytes- 00:08:20.737 [2024-05-13 02:49:11.389304] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.737 [2024-05-13 02:49:11.389350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.737 [2024-05-13 02:49:11.389489] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65291 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.737 [2024-05-13 02:49:11.389515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.737 #6 NEW cov: 12014 ft: 12503 corp: 3/116b lim: 100 exec/s: 0 rss: 70Mb L: 58/58 MS: 1 CrossOver- 00:08:20.737 [2024-05-13 02:49:11.439232] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: 
COMPARE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.737 [2024-05-13 02:49:11.439264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.737 [2024-05-13 02:49:11.439390] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65291 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.737 [2024-05-13 02:49:11.439417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.737 #7 NEW cov: 12020 ft: 12679 corp: 4/174b lim: 100 exec/s: 0 rss: 70Mb L: 58/58 MS: 1 ShuffleBytes- 00:08:20.737 [2024-05-13 02:49:11.499278] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.737 [2024-05-13 02:49:11.499313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.737 #11 NEW cov: 12105 ft: 13828 corp: 5/211b lim: 100 exec/s: 0 rss: 70Mb L: 37/58 MS: 4 CrossOver-ChangeByte-ChangeBit-CrossOver- 00:08:20.737 [2024-05-13 02:49:11.539584] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:16777216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.737 [2024-05-13 02:49:11.539616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.737 [2024-05-13 02:49:11.539740] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.737 [2024-05-13 02:49:11.539768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.996 #15 NEW cov: 12105 ft: 14092 corp: 6/257b lim: 100 exec/s: 0 rss: 70Mb L: 46/58 MS: 4 ChangeByte-CMP-ChangeBit-InsertRepeatedBytes- DE: "\000\000\000\020"- 00:08:20.996 [2024-05-13 02:49:11.579655] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:16777216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.996 [2024-05-13 02:49:11.579687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.996 [2024-05-13 02:49:11.579811] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.996 [2024-05-13 02:49:11.579833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.996 #16 NEW cov: 12105 ft: 14161 corp: 7/307b lim: 100 exec/s: 0 rss: 70Mb L: 50/58 MS: 1 PersAutoDict- DE: "\000\000\000\020"- 00:08:20.996 [2024-05-13 02:49:11.619897] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:16777216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.996 [2024-05-13 02:49:11.619928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.996 [2024-05-13 02:49:11.620052] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:08:20.996 [2024-05-13 02:49:11.620074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.996 #17 NEW cov: 12105 ft: 14201 corp: 8/354b lim: 100 exec/s: 0 rss: 70Mb L: 47/58 MS: 1 InsertByte- 00:08:20.996 [2024-05-13 02:49:11.659787] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:32768 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.996 [2024-05-13 02:49:11.659813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.996 #18 NEW cov: 12105 ft: 14246 corp: 9/391b lim: 100 exec/s: 0 rss: 70Mb L: 37/58 MS: 1 ChangeBit- 00:08:20.996 [2024-05-13 02:49:11.709737] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:16777216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.996 [2024-05-13 02:49:11.709774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.996 [2024-05-13 02:49:11.709900] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.996 [2024-05-13 02:49:11.709925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:20.996 #19 NEW cov: 12105 ft: 14381 corp: 10/442b lim: 100 exec/s: 0 rss: 71Mb L: 51/58 MS: 1 PersAutoDict- DE: "\000\000\000\020"- 00:08:20.996 [2024-05-13 02:49:11.770133] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:20.996 [2024-05-13 02:49:11.770169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:20.996 #20 NEW cov: 12105 ft: 14415 corp: 11/479b lim: 100 exec/s: 0 rss: 71Mb L: 37/58 MS: 1 ShuffleBytes- 00:08:21.255 [2024-05-13 02:49:11.810324] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:16777216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.255 [2024-05-13 02:49:11.810357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.255 [2024-05-13 02:49:11.810492] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.255 [2024-05-13 02:49:11.810519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:21.255 #21 NEW cov: 12105 ft: 14454 corp: 12/534b lim: 100 exec/s: 0 rss: 71Mb L: 55/58 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\000"- 00:08:21.255 [2024-05-13 02:49:11.850366] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.255 [2024-05-13 02:49:11.850397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.255 [2024-05-13 02:49:11.850532] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:08:21.255 [2024-05-13 02:49:11.850556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:21.255 #22 NEW cov: 12105 ft: 14464 corp: 13/589b lim: 100 exec/s: 0 rss: 71Mb L: 55/58 MS: 1 CrossOver- 00:08:21.255 [2024-05-13 02:49:11.890576] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.255 [2024-05-13 02:49:11.890606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.256 [2024-05-13 02:49:11.890730] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.256 [2024-05-13 02:49:11.890754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:21.256 #23 NEW cov: 12105 ft: 14477 corp: 14/646b lim: 100 exec/s: 0 rss: 71Mb L: 57/58 MS: 1 CrossOver- 00:08:21.256 [2024-05-13 02:49:11.931223] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.256 [2024-05-13 02:49:11.931259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.256 [2024-05-13 02:49:11.931388] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65291 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.256 [2024-05-13 02:49:11.931412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:21.256 [2024-05-13 02:49:11.931538] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446742999967727615 len:1286 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.256 [2024-05-13 02:49:11.931562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:21.256 [2024-05-13 02:49:11.931683] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:361700864190383365 len:1286 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.256 [2024-05-13 02:49:11.931702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:21.256 NEW_FUNC[1/1]: 0x1a07120 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:21.256 #24 NEW cov: 12128 ft: 14939 corp: 15/731b lim: 100 exec/s: 0 rss: 71Mb L: 85/85 MS: 1 InsertRepeatedBytes- 00:08:21.256 [2024-05-13 02:49:11.970602] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.256 [2024-05-13 02:49:11.970630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.256 #25 NEW cov: 12128 ft: 14950 corp: 16/761b lim: 100 exec/s: 0 rss: 71Mb L: 30/85 MS: 1 EraseBytes- 00:08:21.256 [2024-05-13 02:49:12.020809] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:16777216 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:08:21.256 [2024-05-13 02:49:12.020841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.256 [2024-05-13 02:49:12.020981] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.256 [2024-05-13 02:49:12.021009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:21.256 #26 NEW cov: 12128 ft: 14958 corp: 17/811b lim: 100 exec/s: 26 rss: 71Mb L: 50/85 MS: 1 ChangeBinInt- 00:08:21.515 [2024-05-13 02:49:12.070657] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.515 [2024-05-13 02:49:12.070695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.515 [2024-05-13 02:49:12.070819] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.515 [2024-05-13 02:49:12.070844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:21.515 #27 NEW cov: 12128 ft: 14975 corp: 18/866b lim: 100 exec/s: 27 rss: 71Mb L: 55/85 MS: 1 ChangeBinInt- 00:08:21.515 [2024-05-13 02:49:12.121297] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.515 [2024-05-13 02:49:12.121333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.515 [2024-05-13 02:49:12.121451] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:22874 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.515 [2024-05-13 02:49:12.121476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:21.515 [2024-05-13 02:49:12.121591] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:6438275382588823897 len:22874 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.515 [2024-05-13 02:49:12.121615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:21.515 [2024-05-13 02:49:12.121747] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:6438275382588823897 len:22874 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.515 [2024-05-13 02:49:12.121772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:21.515 #28 NEW cov: 12128 ft: 15054 corp: 19/949b lim: 100 exec/s: 28 rss: 71Mb L: 83/85 MS: 1 InsertRepeatedBytes- 00:08:21.515 [2024-05-13 02:49:12.182015] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.515 [2024-05-13 02:49:12.182046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.515 
[2024-05-13 02:49:12.182184] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:6438275385384763391 len:22874 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.515 [2024-05-13 02:49:12.182206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:21.515 [2024-05-13 02:49:12.182326] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:6438275382588823897 len:22874 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.515 [2024-05-13 02:49:12.182352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:21.515 [2024-05-13 02:49:12.182483] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:6438275382588823897 len:22874 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.515 [2024-05-13 02:49:12.182509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:21.515 #29 NEW cov: 12128 ft: 15094 corp: 20/1032b lim: 100 exec/s: 29 rss: 72Mb L: 83/85 MS: 1 CopyPart- 00:08:21.515 [2024-05-13 02:49:12.241303] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709420543 len:32768 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.515 [2024-05-13 02:49:12.241331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.515 #30 NEW cov: 12128 ft: 15125 corp: 21/1069b lim: 100 exec/s: 30 rss: 72Mb L: 37/85 MS: 1 ChangeBit- 00:08:21.515 [2024-05-13 02:49:12.301817] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.515 [2024-05-13 02:49:12.301845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.515 [2024-05-13 02:49:12.301974] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65291 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.515 [2024-05-13 02:49:12.302000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:21.775 #31 NEW cov: 12128 ft: 15176 corp: 22/1125b lim: 100 exec/s: 31 rss: 72Mb L: 56/85 MS: 1 EraseBytes- 00:08:21.775 [2024-05-13 02:49:12.362095] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.775 [2024-05-13 02:49:12.362123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.775 [2024-05-13 02:49:12.362262] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:4179340454199820032 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.775 [2024-05-13 02:49:12.362286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:21.775 #32 NEW cov: 12128 ft: 15216 corp: 23/1182b lim: 100 exec/s: 32 rss: 72Mb L: 57/85 MS: 1 ChangeBinInt- 00:08:21.775 [2024-05-13 02:49:12.401456] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.775 [2024-05-13 02:49:12.401483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.775 #33 NEW cov: 12128 ft: 15235 corp: 24/1212b lim: 100 exec/s: 33 rss: 72Mb L: 30/85 MS: 1 ChangeASCIIInt- 00:08:21.775 [2024-05-13 02:49:12.441901] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.775 [2024-05-13 02:49:12.441929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.775 [2024-05-13 02:49:12.442050] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.775 [2024-05-13 02:49:12.442087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:21.775 #34 NEW cov: 12128 ft: 15279 corp: 25/1262b lim: 100 exec/s: 34 rss: 72Mb L: 50/85 MS: 1 EraseBytes- 00:08:21.775 [2024-05-13 02:49:12.492163] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.775 [2024-05-13 02:49:12.492191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.775 #35 NEW cov: 12128 ft: 15344 corp: 26/1292b lim: 100 exec/s: 35 rss: 72Mb L: 30/85 MS: 1 ChangeByte- 00:08:21.775 [2024-05-13 02:49:12.553186] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.775 [2024-05-13 02:49:12.553220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:21.775 [2024-05-13 02:49:12.553304] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:6438275385384763391 len:22874 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.775 [2024-05-13 02:49:12.553328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:21.775 [2024-05-13 02:49:12.553449] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:6438275382588823897 len:22874 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.775 [2024-05-13 02:49:12.553475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:21.775 [2024-05-13 02:49:12.553600] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:6438275382588823897 len:22874 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:21.775 [2024-05-13 02:49:12.553625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:22.034 #36 NEW cov: 12128 ft: 15361 corp: 27/1383b lim: 100 exec/s: 36 rss: 72Mb L: 91/91 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\000"- 00:08:22.034 [2024-05-13 02:49:12.602520] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069599133695 
len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.034 [2024-05-13 02:49:12.602547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.034 #37 NEW cov: 12128 ft: 15385 corp: 28/1418b lim: 100 exec/s: 37 rss: 72Mb L: 35/91 MS: 1 CrossOver- 00:08:22.034 [2024-05-13 02:49:12.653042] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:16777216 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.034 [2024-05-13 02:49:12.653074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.034 [2024-05-13 02:49:12.653180] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.034 [2024-05-13 02:49:12.653202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:22.034 [2024-05-13 02:49:12.653321] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.034 [2024-05-13 02:49:12.653345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:22.034 #38 NEW cov: 12128 ft: 15679 corp: 29/1483b lim: 100 exec/s: 38 rss: 72Mb L: 65/91 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\000"- 00:08:22.034 [2024-05-13 02:49:12.692515] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.034 [2024-05-13 02:49:12.692551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.034 [2024-05-13 02:49:12.692682] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65462 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.034 [2024-05-13 02:49:12.692708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:22.034 #39 NEW cov: 12128 ft: 15755 corp: 30/1541b lim: 100 exec/s: 39 rss: 72Mb L: 58/91 MS: 1 InsertByte- 00:08:22.034 [2024-05-13 02:49:12.733139] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.034 [2024-05-13 02:49:12.733169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.034 [2024-05-13 02:49:12.733298] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446743154586550271 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.034 [2024-05-13 02:49:12.733322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:22.034 #40 NEW cov: 12128 ft: 15761 corp: 31/1598b lim: 100 exec/s: 40 rss: 72Mb L: 57/91 MS: 1 ChangeByte- 00:08:22.034 [2024-05-13 02:49:12.772916] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744072066105343 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.034 
[2024-05-13 02:49:12.772942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.034 #41 NEW cov: 12128 ft: 15763 corp: 32/1634b lim: 100 exec/s: 41 rss: 72Mb L: 36/91 MS: 1 InsertByte- 00:08:22.034 [2024-05-13 02:49:12.823058] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.034 [2024-05-13 02:49:12.823085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.293 #42 NEW cov: 12128 ft: 15769 corp: 33/1669b lim: 100 exec/s: 42 rss: 73Mb L: 35/91 MS: 1 ShuffleBytes- 00:08:22.293 [2024-05-13 02:49:12.863257] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.293 [2024-05-13 02:49:12.863285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.293 #43 NEW cov: 12128 ft: 15772 corp: 34/1704b lim: 100 exec/s: 43 rss: 73Mb L: 35/91 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\000"- 00:08:22.293 [2024-05-13 02:49:12.903530] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:16777216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.293 [2024-05-13 02:49:12.903564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.293 [2024-05-13 02:49:12.903689] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.293 [2024-05-13 02:49:12.903715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:22.293 #44 NEW cov: 12128 ft: 15810 corp: 35/1752b lim: 100 exec/s: 44 rss: 73Mb L: 48/91 MS: 1 InsertByte- 00:08:22.293 [2024-05-13 02:49:12.943429] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.293 [2024-05-13 02:49:12.943455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.294 [2024-05-13 02:49:12.943595] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65291 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.294 [2024-05-13 02:49:12.943624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:22.294 #45 NEW cov: 12128 ft: 15820 corp: 36/1794b lim: 100 exec/s: 45 rss: 73Mb L: 42/91 MS: 1 EraseBytes- 00:08:22.294 [2024-05-13 02:49:12.983071] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744072066105343 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.294 [2024-05-13 02:49:12.983097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.294 #46 NEW cov: 12128 ft: 15841 corp: 37/1830b lim: 100 exec/s: 46 rss: 73Mb L: 36/91 MS: 1 ChangeBinInt- 00:08:22.294 [2024-05-13 02:49:13.033963] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.294 [2024-05-13 02:49:13.033995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:22.294 [2024-05-13 02:49:13.034112] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073575333887 len:65291 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:22.294 [2024-05-13 02:49:13.034134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:22.294 #47 NEW cov: 12128 ft: 15850 corp: 38/1886b lim: 100 exec/s: 23 rss: 73Mb L: 56/91 MS: 1 ChangeBit- 00:08:22.294 #47 DONE cov: 12128 ft: 15850 corp: 38/1886b lim: 100 exec/s: 23 rss: 73Mb 00:08:22.294 ###### Recommended dictionary. ###### 00:08:22.294 "\000\000\000\020" # Uses: 2 00:08:22.294 "\001\000\000\000\000\000\000\000" # Uses: 3 00:08:22.294 ###### End of recommended dictionary. ###### 00:08:22.294 Done 47 runs in 2 second(s) 00:08:22.294 [2024-05-13 02:49:13.061646] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:22.553 02:49:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_24.conf /var/tmp/suppress_nvmf_fuzz 00:08:22.553 02:49:13 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:22.553 02:49:13 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:22.553 02:49:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@79 -- # trap - SIGINT SIGTERM EXIT 00:08:22.553 00:08:22.553 real 1m4.414s 00:08:22.553 user 1m39.329s 00:08:22.553 sys 0m8.599s 00:08:22.553 02:49:13 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:22.553 02:49:13 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:08:22.553 ************************************ 00:08:22.553 END TEST nvmf_fuzz 00:08:22.553 ************************************ 00:08:22.553 02:49:13 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:08:22.553 02:49:13 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:08:22.553 02:49:13 llvm_fuzz -- fuzz/llvm.sh@63 -- # run_test vfio_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:08:22.553 02:49:13 llvm_fuzz -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:22.553 02:49:13 llvm_fuzz -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:22.553 02:49:13 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:08:22.553 ************************************ 00:08:22.553 START TEST vfio_fuzz 00:08:22.553 ************************************ 00:08:22.553 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:08:22.816 * Looking for test storage... 
00:08:22.816 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:22.816 02:49:13 llvm_fuzz.vfio_fuzz -- vfio/run.sh@64 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:08:22.816 02:49:13 llvm_fuzz.vfio_fuzz -- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:08:22.816 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:22.816 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@34 -- # set -e 00:08:22.816 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:22.816 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:22.816 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:08:22.816 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:22.816 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:08:22.816 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:22.816 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:22.816 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:22.816 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:22.816 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:22.816 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:22.816 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:22.816 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:22.816 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:22.816 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:22.816 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:22.816 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:22.816 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:22.816 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:22.816 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:22.816 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:22.816 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:22.816 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:22.816 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:08:22.816 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:22.816 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:22.816 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:22.816 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:22.816 02:49:13 llvm_fuzz.vfio_fuzz 
-- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:22.816 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:22.816 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:22.816 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:22.816 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:22.816 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:22.816 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:22.816 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:22.816 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:22.816 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:22.816 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB=/usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:08:22.816 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER=y 00:08:22.816 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build 00:08:22.816 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:22.816 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:22.816 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:22.816 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:22.816 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:08:22.817 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:22.817 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:22.817 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:22.817 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:22.817 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:08:22.817 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:08:22.817 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:22.817 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:08:22.817 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:08:22.817 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:08:22.817 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:08:22.817 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:08:22.817 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=n 00:08:22.817 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:08:22.817 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:08:22.817 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:08:22.817 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:08:22.817 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@59 -- # CONFIG_GOLANG=n 
00:08:22.817 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:08:22.817 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:08:22.817 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:08:22.817 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@63 -- # CONFIG_RDMA_PROV=verbs 00:08:22.817 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@64 -- # CONFIG_APPS=y 00:08:22.817 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@65 -- # CONFIG_SHARED=n 00:08:22.817 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=n 00:08:22.817 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:08:22.817 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:22.817 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@69 -- # CONFIG_FC=n 00:08:22.817 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@70 -- # CONFIG_AVAHI=n 00:08:22.817 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:08:22.817 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@72 -- # CONFIG_RAID5F=n 00:08:22.817 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@73 -- # CONFIG_EXAMPLES=y 00:08:22.817 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:08:22.817 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:08:22.817 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:08:22.817 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:08:22.817 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:08:22.817 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:08:22.817 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:22.817 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@81 -- # CONFIG_CROSS_PREFIX= 00:08:22.817 02:49:13 llvm_fuzz.vfio_fuzz -- common/build_config.sh@82 -- # CONFIG_URING=n 00:08:22.817 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:08:22.817 02:49:13 llvm_fuzz.vfio_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:08:22.817 02:49:13 llvm_fuzz.vfio_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:08:22.817 02:49:13 llvm_fuzz.vfio_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:08:22.817 02:49:13 llvm_fuzz.vfio_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:22.817 02:49:13 llvm_fuzz.vfio_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:22.817 02:49:13 llvm_fuzz.vfio_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:08:22.817 02:49:13 llvm_fuzz.vfio_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:22.817 02:49:13 llvm_fuzz.vfio_fuzz -- common/applications.sh@14 -- # 
VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:22.817 02:49:13 llvm_fuzz.vfio_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:22.817 02:49:13 llvm_fuzz.vfio_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:22.817 02:49:13 llvm_fuzz.vfio_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:22.817 02:49:13 llvm_fuzz.vfio_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:22.817 02:49:13 llvm_fuzz.vfio_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:22.817 02:49:13 llvm_fuzz.vfio_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:08:22.817 02:49:13 llvm_fuzz.vfio_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:22.817 #define SPDK_CONFIG_H 00:08:22.817 #define SPDK_CONFIG_APPS 1 00:08:22.817 #define SPDK_CONFIG_ARCH native 00:08:22.817 #undef SPDK_CONFIG_ASAN 00:08:22.817 #undef SPDK_CONFIG_AVAHI 00:08:22.817 #undef SPDK_CONFIG_CET 00:08:22.817 #define SPDK_CONFIG_COVERAGE 1 00:08:22.817 #define SPDK_CONFIG_CROSS_PREFIX 00:08:22.817 #undef SPDK_CONFIG_CRYPTO 00:08:22.817 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:22.817 #undef SPDK_CONFIG_CUSTOMOCF 00:08:22.817 #undef SPDK_CONFIG_DAOS 00:08:22.817 #define SPDK_CONFIG_DAOS_DIR 00:08:22.817 #define SPDK_CONFIG_DEBUG 1 00:08:22.817 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:22.817 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build 00:08:22.817 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:08:22.817 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:08:22.817 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:22.817 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:08:22.817 #define SPDK_CONFIG_EXAMPLES 1 00:08:22.817 #undef SPDK_CONFIG_FC 00:08:22.817 #define SPDK_CONFIG_FC_PATH 00:08:22.817 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:22.817 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:22.817 #undef SPDK_CONFIG_FUSE 00:08:22.817 #define SPDK_CONFIG_FUZZER 1 00:08:22.817 #define SPDK_CONFIG_FUZZER_LIB /usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:08:22.817 #undef SPDK_CONFIG_GOLANG 00:08:22.817 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:22.817 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:08:22.817 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:22.817 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:08:22.817 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:22.817 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:22.817 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:22.817 #define SPDK_CONFIG_IDXD 1 00:08:22.817 #undef SPDK_CONFIG_IDXD_KERNEL 00:08:22.817 #undef SPDK_CONFIG_IPSEC_MB 00:08:22.817 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:22.817 #define SPDK_CONFIG_ISAL 1 00:08:22.817 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:22.817 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:22.817 #define SPDK_CONFIG_LIBDIR 00:08:22.817 #undef SPDK_CONFIG_LTO 00:08:22.817 #define SPDK_CONFIG_MAX_LCORES 00:08:22.817 #define SPDK_CONFIG_NVME_CUSE 1 00:08:22.817 #undef SPDK_CONFIG_OCF 00:08:22.817 #define SPDK_CONFIG_OCF_PATH 00:08:22.817 #define SPDK_CONFIG_OPENSSL_PATH 00:08:22.817 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:22.817 #define SPDK_CONFIG_PGO_DIR 00:08:22.817 #undef SPDK_CONFIG_PGO_USE 00:08:22.817 #define SPDK_CONFIG_PREFIX /usr/local 00:08:22.817 #undef SPDK_CONFIG_RAID5F 
00:08:22.817 #undef SPDK_CONFIG_RBD 00:08:22.817 #define SPDK_CONFIG_RDMA 1 00:08:22.817 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:22.817 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:22.817 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:22.817 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:22.817 #undef SPDK_CONFIG_SHARED 00:08:22.817 #undef SPDK_CONFIG_SMA 00:08:22.817 #define SPDK_CONFIG_TESTS 1 00:08:22.817 #undef SPDK_CONFIG_TSAN 00:08:22.817 #define SPDK_CONFIG_UBLK 1 00:08:22.817 #define SPDK_CONFIG_UBSAN 1 00:08:22.817 #undef SPDK_CONFIG_UNIT_TESTS 00:08:22.817 #undef SPDK_CONFIG_URING 00:08:22.817 #define SPDK_CONFIG_URING_PATH 00:08:22.817 #undef SPDK_CONFIG_URING_ZNS 00:08:22.817 #undef SPDK_CONFIG_USDT 00:08:22.817 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:22.817 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:22.817 #define SPDK_CONFIG_VFIO_USER 1 00:08:22.817 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:22.817 #define SPDK_CONFIG_VHOST 1 00:08:22.817 #define SPDK_CONFIG_VIRTIO 1 00:08:22.817 #undef SPDK_CONFIG_VTUNE 00:08:22.817 #define SPDK_CONFIG_VTUNE_DIR 00:08:22.817 #define SPDK_CONFIG_WERROR 1 00:08:22.817 #define SPDK_CONFIG_WPDK_DIR 00:08:22.817 #undef SPDK_CONFIG_XNVME 00:08:22.817 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- paths/export.sh@5 -- # export PATH 
00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- pm/common@68 -- # uname -s 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- pm/common@68 -- # PM_OS=Linux 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- pm/common@76 -- # SUDO[0]= 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- pm/common@88 -- # [[ ! 
-d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@57 -- # : 1 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@61 -- # : 0 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@63 -- # : 0 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@65 -- # : 1 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@67 -- # : 0 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@69 -- # : 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@71 -- # : 0 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@73 -- # : 0 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@75 -- # : 0 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@77 -- # : 0 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@79 -- # : 0 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@81 -- # : 0 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@83 -- # : 0 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@85 -- # : 0 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@87 -- # : 0 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@89 -- # : 0 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@91 -- # : 0 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@93 -- # : 0 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@95 -- # : 0 00:08:22.818 
02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@97 -- # : 1 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@99 -- # : 1 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@101 -- # : rdma 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@103 -- # : 0 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@105 -- # : 0 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@107 -- # : 0 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:08:22.818 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@109 -- # : 0 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@111 -- # : 0 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@113 -- # : 0 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@115 -- # : 0 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@117 -- # : 0 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@119 -- # : 0 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@121 -- # : 1 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@123 -- # : /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@125 -- # : 0 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@127 -- # : 0 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@129 -- # : 0 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@131 -- # : 0 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 
00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@133 -- # : 0 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@135 -- # : 0 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@137 -- # : main 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@139 -- # : true 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@141 -- # : 0 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@143 -- # : 0 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@145 -- # : 0 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@147 -- # : 0 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@149 -- # : 0 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@151 -- # : 0 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@153 -- # : 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@155 -- # : 0 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@157 -- # : 0 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@159 -- # : 0 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@161 -- # : 0 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@163 -- # : 0 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@166 -- # : 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@168 -- # : 0 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@170 -- # : 0 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- 
common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@180 -- # 
PCI_BLOCK_SYNC_ON_RESET=yes 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@199 -- # cat 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@235 -- # echo leak:libfuse3.so 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:22.819 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:22.820 02:49:13 
llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@255 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@255 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@262 -- # export valgrind= 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@262 -- # valgrind= 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@268 -- # uname -s 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@270 -- # export CLEAR_HUGE=yes 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@278 -- # MAKE=make 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j112 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@298 -- # TEST_MODE= 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@317 -- # [[ -z 3508627 ]] 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@317 -- # kill -0 3508627 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- 
common/autotest_common.sh@330 -- # local mount target_dir 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.CL1bLl 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@354 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio /tmp/spdk.CL1bLl/tests/vfio /tmp/spdk.CL1bLl 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@326 -- # df -T 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_devtmpfs 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=67108864 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=67108864 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/pmem0 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=ext2 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=976003072 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=5284429824 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=4308426752 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_root 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=overlay 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=52104925184 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=61742305280 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=9637380096 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 
00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=30866440192 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=30871150592 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=4710400 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=12342489088 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=12348461056 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=5971968 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=30870355968 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=30871154688 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=798720 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=6174224384 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=6174228480 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:08:22.820 * Looking for test storage... 
00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@367 -- # local target_space new_size 00:08:22.820 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:08:22.821 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@371 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:22.821 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:22.821 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@371 -- # mount=/ 00:08:22.821 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@373 -- # target_space=52104925184 00:08:22.821 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:08:22.821 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:08:22.821 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@379 -- # [[ overlay == tmpfs ]] 00:08:22.821 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@379 -- # [[ overlay == ramfs ]] 00:08:22.821 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:08:22.821 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@380 -- # new_size=11851972608 00:08:22.821 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:22.821 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:22.821 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:22.821 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:22.821 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:22.821 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@388 -- # return 0 00:08:22.821 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1678 -- # set -o errtrace 00:08:22.821 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:08:22.821 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:22.821 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:22.821 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1683 -- # true 00:08:22.821 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1685 -- # xtrace_fd 00:08:22.821 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:22.821 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:22.821 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@27 -- # exec 00:08:22.821 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@29 -- # exec 00:08:22.821 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:22.821 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:08:22.821 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:22.821 02:49:13 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@18 -- # set -x 00:08:22.821 02:49:13 llvm_fuzz.vfio_fuzz -- vfio/run.sh@65 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/../common.sh 00:08:22.821 02:49:13 llvm_fuzz.vfio_fuzz -- ../common.sh@8 -- # pids=() 00:08:22.821 02:49:13 llvm_fuzz.vfio_fuzz -- vfio/run.sh@67 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:08:22.821 02:49:13 llvm_fuzz.vfio_fuzz -- vfio/run.sh@68 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:08:22.821 02:49:13 llvm_fuzz.vfio_fuzz -- vfio/run.sh@68 -- # fuzz_num=7 00:08:22.821 02:49:13 llvm_fuzz.vfio_fuzz -- vfio/run.sh@69 -- # (( fuzz_num != 0 )) 00:08:22.821 02:49:13 llvm_fuzz.vfio_fuzz -- vfio/run.sh@71 -- # trap 'cleanup /tmp/vfio-user-* /var/tmp/suppress_vfio_fuzz; exit 1' SIGINT SIGTERM EXIT 00:08:22.821 02:49:13 llvm_fuzz.vfio_fuzz -- vfio/run.sh@74 -- # mem_size=0 00:08:22.821 02:49:13 llvm_fuzz.vfio_fuzz -- vfio/run.sh@75 -- # [[ 1 -eq 1 ]] 00:08:22.821 02:49:13 llvm_fuzz.vfio_fuzz -- vfio/run.sh@76 -- # start_llvm_fuzz_short 7 1 00:08:22.821 02:49:13 llvm_fuzz.vfio_fuzz -- ../common.sh@69 -- # local fuzz_num=7 00:08:22.821 02:49:13 llvm_fuzz.vfio_fuzz -- ../common.sh@70 -- # local time=1 00:08:22.821 02:49:13 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:08:22.821 02:49:13 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:22.821 02:49:13 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:08:22.821 02:49:13 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=0 00:08:22.821 02:49:13 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:22.821 02:49:13 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:22.821 02:49:13 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:08:22.821 02:49:13 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-0 00:08:22.821 02:49:13 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-0/domain/1 00:08:22.821 02:49:13 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-0/domain/2 00:08:22.821 02:49:13 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-0/fuzz_vfio_json.conf 00:08:22.821 02:49:13 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:22.821 02:49:13 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:22.821 02:49:13 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-0 /tmp/vfio-user-0/domain/1 /tmp/vfio-user-0/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:08:22.821 02:49:13 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-0/domain/1%; 00:08:22.821 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-0/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:22.821 02:49:13 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:22.821 02:49:13 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo 
leak:nvmf_ctrlr_create 00:08:22.821 02:49:13 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-0/domain/1 -c /tmp/vfio-user-0/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 -Y /tmp/vfio-user-0/domain/2 -r /tmp/vfio-user-0/spdk0.sock -Z 0 00:08:22.821 [2024-05-13 02:49:13.594318] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:08:22.821 [2024-05-13 02:49:13.594409] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3508696 ] 00:08:23.080 EAL: No free 2048 kB hugepages reported on node 1 00:08:23.080 [2024-05-13 02:49:13.631459] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:23.080 [2024-05-13 02:49:13.669282] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.080 [2024-05-13 02:49:13.708701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.080 [2024-05-13 02:49:13.874835] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:23.339 INFO: Running with entropic power schedule (0xFF, 100). 00:08:23.339 INFO: Seed: 1092677545 00:08:23.339 INFO: Loaded 1 modules (347665 inline 8-bit counters): 347665 [0x2750f8c, 0x27a5d9d), 00:08:23.339 INFO: Loaded 1 PC tables (347665 PCs): 347665 [0x27a5da0,0x2cf3eb0), 00:08:23.339 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:08:23.339 INFO: A corpus is not provided, starting from an empty corpus 00:08:23.339 #2 INITED exec/s: 0 rss: 64Mb 00:08:23.339 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:23.339 This may also happen if the target rejected all inputs we tried so far 00:08:23.339 [2024-05-13 02:49:13.950399] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: enabling controller 00:08:23.598 NEW_FUNC[1/642]: 0x4a3680 in fuzz_vfio_user_region_rw /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:84 00:08:23.598 NEW_FUNC[2/642]: 0x4a9190 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:23.598 #17 NEW cov: 10854 ft: 10862 corp: 2/7b lim: 6 exec/s: 0 rss: 70Mb L: 6/6 MS: 5 ChangeByte-ChangeASCIIInt-ShuffleBytes-CopyPart-InsertRepeatedBytes- 00:08:23.857 NEW_FUNC[1/2]: 0x13c2220 in handle_cmd_req /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/vfio_user.c:5564 00:08:23.857 NEW_FUNC[2/2]: 0x13ec450 in handle_sq_tdbl_write /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/vfio_user.c:2551 00:08:23.858 #18 NEW cov: 10905 ft: 14760 corp: 3/13b lim: 6 exec/s: 0 rss: 70Mb L: 6/6 MS: 1 ChangeBinInt- 00:08:24.117 NEW_FUNC[1/1]: 0x19d3650 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:24.117 #29 NEW cov: 10922 ft: 15391 corp: 4/19b lim: 6 exec/s: 0 rss: 71Mb L: 6/6 MS: 1 ChangeBit- 00:08:24.375 #35 NEW cov: 10922 ft: 16771 corp: 5/25b lim: 6 exec/s: 35 rss: 71Mb L: 6/6 MS: 1 ChangeASCIIInt- 00:08:24.375 #36 NEW cov: 10922 ft: 16821 corp: 6/31b lim: 6 exec/s: 36 rss: 71Mb L: 6/6 MS: 1 ChangeASCIIInt- 00:08:24.634 #37 NEW cov: 10922 ft: 17075 corp: 7/37b lim: 6 exec/s: 37 rss: 71Mb L: 6/6 MS: 1 ChangeBinInt- 00:08:24.634 #38 NEW cov: 10922 ft: 17446 corp: 8/43b lim: 6 exec/s: 38 rss: 71Mb L: 6/6 MS: 1 ChangeByte- 00:08:24.894 #39 NEW cov: 10922 ft: 17754 corp: 9/49b lim: 6 exec/s: 39 rss: 71Mb L: 6/6 MS: 1 CopyPart- 00:08:25.153 #44 NEW cov: 10929 ft: 18005 corp: 10/55b lim: 6 exec/s: 44 rss: 71Mb L: 6/6 MS: 5 CMP-CopyPart-ShuffleBytes-ChangeByte-CopyPart- DE: "\035\000\000\000"- 00:08:25.153 #45 NEW cov: 10929 ft: 18173 corp: 11/61b lim: 6 exec/s: 22 rss: 71Mb L: 6/6 MS: 1 ChangeByte- 00:08:25.153 #45 DONE cov: 10929 ft: 18173 corp: 11/61b lim: 6 exec/s: 22 rss: 71Mb 00:08:25.153 ###### Recommended dictionary. ###### 00:08:25.153 "\035\000\000\000" # Uses: 0 00:08:25.153 ###### End of recommended dictionary. 
###### 00:08:25.153 Done 45 runs in 2 second(s) 00:08:25.412 [2024-05-13 02:49:15.972572] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: disabling controller 00:08:25.412 [2024-05-13 02:49:16.022113] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:25.412 02:49:16 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-0 /var/tmp/suppress_vfio_fuzz 00:08:25.412 02:49:16 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:25.412 02:49:16 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:25.412 02:49:16 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:08:25.412 02:49:16 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=1 00:08:25.412 02:49:16 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:25.412 02:49:16 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:25.412 02:49:16 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:08:25.412 02:49:16 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-1 00:08:25.412 02:49:16 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-1/domain/1 00:08:25.413 02:49:16 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-1/domain/2 00:08:25.413 02:49:16 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-1/fuzz_vfio_json.conf 00:08:25.413 02:49:16 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:25.413 02:49:16 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:25.413 02:49:16 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-1 /tmp/vfio-user-1/domain/1 /tmp/vfio-user-1/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:08:25.672 02:49:16 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-1/domain/1%; 00:08:25.672 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-1/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:25.672 02:49:16 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:25.672 02:49:16 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:25.672 02:49:16 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-1/domain/1 -c /tmp/vfio-user-1/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 -Y /tmp/vfio-user-1/domain/2 -r /tmp/vfio-user-1/spdk1.sock -Z 1 00:08:25.672 [2024-05-13 02:49:16.250530] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 
00:08:25.672 [2024-05-13 02:49:16.250611] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3509211 ] 00:08:25.672 EAL: No free 2048 kB hugepages reported on node 1 00:08:25.672 [2024-05-13 02:49:16.286249] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:25.672 [2024-05-13 02:49:16.323722] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.672 [2024-05-13 02:49:16.361453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.931 [2024-05-13 02:49:16.527906] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:25.931 INFO: Running with entropic power schedule (0xFF, 100). 00:08:25.931 INFO: Seed: 3745670697 00:08:25.931 INFO: Loaded 1 modules (347665 inline 8-bit counters): 347665 [0x2750f8c, 0x27a5d9d), 00:08:25.931 INFO: Loaded 1 PC tables (347665 PCs): 347665 [0x27a5da0,0x2cf3eb0), 00:08:25.931 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:08:25.931 INFO: A corpus is not provided, starting from an empty corpus 00:08:25.931 #2 INITED exec/s: 0 rss: 63Mb 00:08:25.931 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:25.931 This may also happen if the target rejected all inputs we tried so far 00:08:25.931 [2024-05-13 02:49:16.596117] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: enabling controller 00:08:25.931 [2024-05-13 02:49:16.623411] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:25.931 [2024-05-13 02:49:16.623438] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:25.931 [2024-05-13 02:49:16.623457] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:26.190 NEW_FUNC[1/646]: 0x4a3c20 in fuzz_vfio_user_version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:71 00:08:26.190 NEW_FUNC[2/646]: 0x4a9190 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:26.190 #26 NEW cov: 10879 ft: 10360 corp: 2/5b lim: 4 exec/s: 0 rss: 70Mb L: 4/4 MS: 4 ChangeBit-InsertByte-CopyPart-InsertByte- 00:08:26.448 [2024-05-13 02:49:17.034418] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:26.448 [2024-05-13 02:49:17.034449] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:26.448 [2024-05-13 02:49:17.034484] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:26.448 #27 NEW cov: 10897 ft: 13552 corp: 3/9b lim: 4 exec/s: 0 rss: 70Mb L: 4/4 MS: 1 ChangeByte- 00:08:26.448 [2024-05-13 02:49:17.150455] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:26.448 [2024-05-13 02:49:17.150482] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:26.448 [2024-05-13 02:49:17.150518] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:26.448 #30 NEW cov: 10897 ft: 14487 corp: 4/13b 
lim: 4 exec/s: 0 rss: 71Mb L: 4/4 MS: 3 CopyPart-CrossOver-CopyPart- 00:08:26.707 [2024-05-13 02:49:17.285502] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:26.707 [2024-05-13 02:49:17.285527] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:26.707 [2024-05-13 02:49:17.285545] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:26.707 #36 NEW cov: 10897 ft: 15048 corp: 5/17b lim: 4 exec/s: 0 rss: 71Mb L: 4/4 MS: 1 ShuffleBytes- 00:08:26.707 [2024-05-13 02:49:17.400328] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:26.707 [2024-05-13 02:49:17.400354] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:26.707 [2024-05-13 02:49:17.400372] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:26.707 NEW_FUNC[1/1]: 0x19d3650 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:26.707 #37 NEW cov: 10914 ft: 15274 corp: 6/21b lim: 4 exec/s: 0 rss: 71Mb L: 4/4 MS: 1 ChangeBit- 00:08:26.966 [2024-05-13 02:49:17.516496] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:26.966 [2024-05-13 02:49:17.516522] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:26.966 [2024-05-13 02:49:17.516541] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:26.966 #40 NEW cov: 10914 ft: 15711 corp: 7/25b lim: 4 exec/s: 40 rss: 72Mb L: 4/4 MS: 3 EraseBytes-ChangeBit-CopyPart- 00:08:26.966 [2024-05-13 02:49:17.641268] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:26.966 [2024-05-13 02:49:17.641293] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:26.966 [2024-05-13 02:49:17.641328] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:26.966 #44 NEW cov: 10914 ft: 16522 corp: 8/29b lim: 4 exec/s: 44 rss: 72Mb L: 4/4 MS: 4 EraseBytes-EraseBytes-CrossOver-InsertByte- 00:08:26.966 [2024-05-13 02:49:17.756104] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:26.966 [2024-05-13 02:49:17.756130] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:26.966 [2024-05-13 02:49:17.756149] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:27.247 #45 NEW cov: 10914 ft: 16577 corp: 9/33b lim: 4 exec/s: 45 rss: 72Mb L: 4/4 MS: 1 CrossOver- 00:08:27.247 [2024-05-13 02:49:17.873200] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:27.247 [2024-05-13 02:49:17.873226] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:27.247 [2024-05-13 02:49:17.873244] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:27.247 #49 NEW cov: 10914 ft: 16610 corp: 10/37b lim: 4 exec/s: 49 rss: 72Mb L: 4/4 MS: 4 EraseBytes-ChangeByte-ShuffleBytes-CMP- DE: "\001\000"- 00:08:27.247 [2024-05-13 02:49:17.988049] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:27.247 [2024-05-13 02:49:17.988074] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:27.247 [2024-05-13 02:49:17.988093] vfio_user.c: 
144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:27.506 #50 NEW cov: 10914 ft: 16644 corp: 11/41b lim: 4 exec/s: 50 rss: 72Mb L: 4/4 MS: 1 ChangeBit- 00:08:27.506 [2024-05-13 02:49:18.103088] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:27.506 [2024-05-13 02:49:18.103113] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:27.506 [2024-05-13 02:49:18.103132] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:27.506 #51 NEW cov: 10914 ft: 16866 corp: 12/45b lim: 4 exec/s: 51 rss: 72Mb L: 4/4 MS: 1 CrossOver- 00:08:27.506 [2024-05-13 02:49:18.217092] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:27.506 [2024-05-13 02:49:18.217117] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:27.506 [2024-05-13 02:49:18.217135] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:27.506 #60 NEW cov: 10914 ft: 16953 corp: 13/49b lim: 4 exec/s: 60 rss: 72Mb L: 4/4 MS: 4 EraseBytes-EraseBytes-CopyPart-CopyPart- 00:08:27.765 [2024-05-13 02:49:18.332054] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:27.765 [2024-05-13 02:49:18.332080] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:27.765 [2024-05-13 02:49:18.332115] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:27.765 #61 NEW cov: 10921 ft: 17472 corp: 14/53b lim: 4 exec/s: 61 rss: 72Mb L: 4/4 MS: 1 PersAutoDict- DE: "\001\000"- 00:08:27.765 [2024-05-13 02:49:18.493048] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:27.765 [2024-05-13 02:49:18.493071] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:27.765 [2024-05-13 02:49:18.493089] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:28.024 #62 NEW cov: 10921 ft: 17971 corp: 15/57b lim: 4 exec/s: 31 rss: 72Mb L: 4/4 MS: 1 ShuffleBytes- 00:08:28.024 #62 DONE cov: 10921 ft: 17971 corp: 15/57b lim: 4 exec/s: 31 rss: 72Mb 00:08:28.024 ###### Recommended dictionary. ###### 00:08:28.024 "\001\000" # Uses: 1 00:08:28.024 ###### End of recommended dictionary. 
###### 00:08:28.024 Done 62 runs in 2 second(s) 00:08:28.024 [2024-05-13 02:49:18.626570] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: disabling controller 00:08:28.024 [2024-05-13 02:49:18.676192] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:28.283 02:49:18 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-1 /var/tmp/suppress_vfio_fuzz 00:08:28.283 02:49:18 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:28.283 02:49:18 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:28.283 02:49:18 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:08:28.283 02:49:18 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=2 00:08:28.283 02:49:18 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:28.283 02:49:18 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:28.283 02:49:18 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:08:28.283 02:49:18 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-2 00:08:28.283 02:49:18 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-2/domain/1 00:08:28.283 02:49:18 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-2/domain/2 00:08:28.283 02:49:18 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-2/fuzz_vfio_json.conf 00:08:28.283 02:49:18 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:28.283 02:49:18 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:28.283 02:49:18 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-2 /tmp/vfio-user-2/domain/1 /tmp/vfio-user-2/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:08:28.283 02:49:18 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-2/domain/1%; 00:08:28.283 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-2/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:28.283 02:49:18 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:28.283 02:49:18 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:28.283 02:49:18 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-2/domain/1 -c /tmp/vfio-user-2/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 -Y /tmp/vfio-user-2/domain/2 -r /tmp/vfio-user-2/spdk2.sock -Z 2 00:08:28.283 [2024-05-13 02:49:18.908919] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 
00:08:28.283 [2024-05-13 02:49:18.908992] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3509749 ] 00:08:28.283 EAL: No free 2048 kB hugepages reported on node 1 00:08:28.283 [2024-05-13 02:49:18.944415] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:28.283 [2024-05-13 02:49:18.981374] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.283 [2024-05-13 02:49:19.019619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.542 [2024-05-13 02:49:19.182756] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:28.542 INFO: Running with entropic power schedule (0xFF, 100). 00:08:28.542 INFO: Seed: 2107699066 00:08:28.542 INFO: Loaded 1 modules (347665 inline 8-bit counters): 347665 [0x2750f8c, 0x27a5d9d), 00:08:28.542 INFO: Loaded 1 PC tables (347665 PCs): 347665 [0x27a5da0,0x2cf3eb0), 00:08:28.542 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:08:28.542 INFO: A corpus is not provided, starting from an empty corpus 00:08:28.542 #2 INITED exec/s: 0 rss: 63Mb 00:08:28.542 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:28.542 This may also happen if the target rejected all inputs we tried so far 00:08:28.542 [2024-05-13 02:49:19.251362] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: enabling controller 00:08:28.542 [2024-05-13 02:49:19.277757] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:29.059 NEW_FUNC[1/645]: 0x4a4600 in fuzz_vfio_user_get_region_info /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:103 00:08:29.059 NEW_FUNC[2/645]: 0x4a9190 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:29.059 #4 NEW cov: 10867 ft: 10364 corp: 2/9b lim: 8 exec/s: 0 rss: 70Mb L: 8/8 MS: 2 InsertByte-InsertRepeatedBytes- 00:08:29.059 [2024-05-13 02:49:19.689251] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:29.059 #19 NEW cov: 10884 ft: 13253 corp: 3/17b lim: 8 exec/s: 0 rss: 70Mb L: 8/8 MS: 5 ChangeBit-CopyPart-InsertRepeatedBytes-CrossOver-CopyPart- 00:08:29.059 [2024-05-13 02:49:19.813204] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:29.318 #25 NEW cov: 10884 ft: 14526 corp: 4/25b lim: 8 exec/s: 0 rss: 71Mb L: 8/8 MS: 1 ChangeBit- 00:08:29.318 [2024-05-13 02:49:19.937568] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:29.318 #26 NEW cov: 10884 ft: 15511 corp: 5/33b lim: 8 exec/s: 0 rss: 71Mb L: 8/8 MS: 1 ChangeBit- 00:08:29.318 [2024-05-13 02:49:20.096514] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:29.576 NEW_FUNC[1/1]: 0x19d3650 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:29.576 #27 NEW cov: 10901 ft: 16300 corp: 6/41b lim: 8 exec/s: 0 rss: 71Mb L: 8/8 MS: 1 ShuffleBytes- 00:08:29.576 [2024-05-13 
02:49:20.287688] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:29.835 #33 NEW cov: 10901 ft: 17173 corp: 7/49b lim: 8 exec/s: 33 rss: 71Mb L: 8/8 MS: 1 ChangeBinInt- 00:08:29.835 [2024-05-13 02:49:20.496347] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:29.835 #34 NEW cov: 10901 ft: 17495 corp: 8/57b lim: 8 exec/s: 34 rss: 71Mb L: 8/8 MS: 1 ChangeBinInt- 00:08:30.093 [2024-05-13 02:49:20.682791] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:30.093 #35 NEW cov: 10901 ft: 17819 corp: 9/65b lim: 8 exec/s: 35 rss: 71Mb L: 8/8 MS: 1 ShuffleBytes- 00:08:30.093 [2024-05-13 02:49:20.867187] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:30.351 #41 NEW cov: 10901 ft: 18171 corp: 10/73b lim: 8 exec/s: 41 rss: 71Mb L: 8/8 MS: 1 ChangeBit- 00:08:30.351 [2024-05-13 02:49:21.054869] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:30.610 #42 NEW cov: 10908 ft: 18203 corp: 11/81b lim: 8 exec/s: 42 rss: 71Mb L: 8/8 MS: 1 CrossOver- 00:08:30.610 [2024-05-13 02:49:21.240353] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:30.610 #43 NEW cov: 10908 ft: 18464 corp: 12/89b lim: 8 exec/s: 21 rss: 71Mb L: 8/8 MS: 1 ChangeBinInt- 00:08:30.610 #43 DONE cov: 10908 ft: 18464 corp: 12/89b lim: 8 exec/s: 21 rss: 71Mb 00:08:30.610 Done 43 runs in 2 second(s) 00:08:30.610 [2024-05-13 02:49:21.371571] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: disabling controller 00:08:30.869 [2024-05-13 02:49:21.420936] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:30.869 02:49:21 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-2 /var/tmp/suppress_vfio_fuzz 00:08:30.869 02:49:21 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:30.869 02:49:21 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:30.869 02:49:21 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:08:30.869 02:49:21 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=3 00:08:30.869 02:49:21 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:30.869 02:49:21 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:30.869 02:49:21 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:08:30.869 02:49:21 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-3 00:08:30.869 02:49:21 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-3/domain/1 00:08:30.869 02:49:21 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-3/domain/2 00:08:30.869 02:49:21 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-3/fuzz_vfio_json.conf 00:08:30.869 02:49:21 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:30.869 02:49:21 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:30.869 02:49:21 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-3 /tmp/vfio-user-3/domain/1 /tmp/vfio-user-3/domain/2 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:08:30.869 02:49:21 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-3/domain/1%; 00:08:30.869 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-3/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:30.869 02:49:21 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:30.869 02:49:21 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:30.869 02:49:21 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-3/domain/1 -c /tmp/vfio-user-3/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 -Y /tmp/vfio-user-3/domain/2 -r /tmp/vfio-user-3/spdk3.sock -Z 3 00:08:30.869 [2024-05-13 02:49:21.650257] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:08:30.869 [2024-05-13 02:49:21.650332] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3510210 ] 00:08:31.127 EAL: No free 2048 kB hugepages reported on node 1 00:08:31.127 [2024-05-13 02:49:21.686284] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:31.127 [2024-05-13 02:49:21.725247] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.127 [2024-05-13 02:49:21.764516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.127 [2024-05-13 02:49:21.929870] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:31.386 INFO: Running with entropic power schedule (0xFF, 100). 00:08:31.386 INFO: Seed: 557741387 00:08:31.386 INFO: Loaded 1 modules (347665 inline 8-bit counters): 347665 [0x2750f8c, 0x27a5d9d), 00:08:31.386 INFO: Loaded 1 PC tables (347665 PCs): 347665 [0x27a5da0,0x2cf3eb0), 00:08:31.386 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:08:31.386 INFO: A corpus is not provided, starting from an empty corpus 00:08:31.386 #2 INITED exec/s: 0 rss: 64Mb 00:08:31.386 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:31.386 This may also happen if the target rejected all inputs we tried so far 00:08:31.386 [2024-05-13 02:49:22.005449] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: enabling controller 00:08:31.386 [2024-05-13 02:49:22.045411] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to memory map DMA region [(nil), (nil)) fd=323 offset=0 prot=0x3: Invalid argument 00:08:31.386 [2024-05-13 02:49:22.045434] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0, 0) offset=0 flags=0x3: Invalid argument 00:08:31.386 [2024-05-13 02:49:22.045445] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: Invalid argument 00:08:31.386 [2024-05-13 02:49:22.045462] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:08:31.644 NEW_FUNC[1/643]: 0x4a4ce0 in fuzz_vfio_user_dma_map /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:124 00:08:31.644 NEW_FUNC[2/643]: 0x4a9190 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:31.644 #58 NEW cov: 10871 ft: 10669 corp: 2/33b lim: 32 exec/s: 0 rss: 70Mb L: 32/32 MS: 1 InsertRepeatedBytes- 00:08:31.902 [2024-05-13 02:49:22.506104] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to memory map DMA region [(nil), (nil)) fd=325 offset=0 prot=0x3: Invalid argument 00:08:31.902 [2024-05-13 02:49:22.506139] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0, 0) offset=0 flags=0x3: Invalid argument 00:08:31.902 [2024-05-13 02:49:22.506150] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: Invalid argument 00:08:31.902 [2024-05-13 02:49:22.506167] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:08:31.902 NEW_FUNC[1/2]: 0x165b4a0 in _is_io_flags_valid /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_ns_cmd.c:141 00:08:31.902 NEW_FUNC[2/2]: 0x1677ed0 in _nvme_md_excluded_from_xfer /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_ns_cmd.c:54 00:08:31.902 #69 NEW cov: 10890 ft: 13179 corp: 3/65b lim: 32 exec/s: 0 rss: 70Mb L: 32/32 MS: 1 CrossOver- 00:08:31.902 [2024-05-13 02:49:22.678302] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to memory map DMA region [0x400000000, 0x400000000) fd=325 offset=0 prot=0x3: Invalid argument 00:08:31.902 [2024-05-13 02:49:22.678326] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0x400000000, 0x400000000) offset=0 flags=0x3: Invalid argument 00:08:31.902 [2024-05-13 02:49:22.678337] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: Invalid argument 00:08:31.902 [2024-05-13 02:49:22.678353] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:08:32.159 NEW_FUNC[1/1]: 0x19d3650 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:32.159 #70 NEW cov: 10907 ft: 14784 corp: 4/97b lim: 32 exec/s: 0 rss: 71Mb L: 32/32 MS: 1 ChangeBit- 00:08:32.159 [2024-05-13 02:49:22.850368] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to memory map DMA region [(nil), (nil)) fd=325 offset=0 prot=0x3: Invalid argument 00:08:32.159 [2024-05-13 02:49:22.850399] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA 
region [0, 0) offset=0 flags=0x3: Invalid argument 00:08:32.159 [2024-05-13 02:49:22.850411] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: Invalid argument 00:08:32.159 [2024-05-13 02:49:22.850428] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:08:32.159 #71 NEW cov: 10907 ft: 15208 corp: 5/129b lim: 32 exec/s: 0 rss: 71Mb L: 32/32 MS: 1 ShuffleBytes- 00:08:32.417 [2024-05-13 02:49:23.021780] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to memory map DMA region [0x40000000000000, 0x40000000000000) fd=325 offset=0 prot=0x3: Invalid argument 00:08:32.417 [2024-05-13 02:49:23.021804] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0x40000000000000, 0x40000000000000) offset=0 flags=0x3: Invalid argument 00:08:32.417 [2024-05-13 02:49:23.021815] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: Invalid argument 00:08:32.417 [2024-05-13 02:49:23.021849] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:08:32.417 #72 NEW cov: 10907 ft: 15377 corp: 6/161b lim: 32 exec/s: 72 rss: 71Mb L: 32/32 MS: 1 ChangeBit- 00:08:32.417 [2024-05-13 02:49:23.188271] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to memory map DMA region [(nil), (nil)) fd=325 offset=0 prot=0x3: Invalid argument 00:08:32.417 [2024-05-13 02:49:23.188296] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0, 0) offset=0 flags=0x3: Invalid argument 00:08:32.417 [2024-05-13 02:49:23.188306] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: Invalid argument 00:08:32.417 [2024-05-13 02:49:23.188323] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:08:32.675 #73 NEW cov: 10910 ft: 15788 corp: 7/193b lim: 32 exec/s: 73 rss: 71Mb L: 32/32 MS: 1 ChangeBit- 00:08:32.675 [2024-05-13 02:49:23.353871] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to memory map DMA region [(nil), (nil)) fd=325 offset=0 prot=0x3: Invalid argument 00:08:32.675 [2024-05-13 02:49:23.353894] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0, 0) offset=0 flags=0x3: Invalid argument 00:08:32.675 [2024-05-13 02:49:23.353905] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: Invalid argument 00:08:32.675 [2024-05-13 02:49:23.353936] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:08:32.675 #74 NEW cov: 10910 ft: 16058 corp: 8/225b lim: 32 exec/s: 74 rss: 71Mb L: 32/32 MS: 1 ShuffleBytes- 00:08:32.932 [2024-05-13 02:49:23.519403] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to memory map DMA region [0x6200000000000000, 0x6200000000000000) fd=325 offset=0 prot=0x3: Invalid argument 00:08:32.932 [2024-05-13 02:49:23.519425] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0x6200000000000000, 0x6200000000000000) offset=0 flags=0x3: Invalid argument 00:08:32.932 [2024-05-13 02:49:23.519435] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: Invalid argument 00:08:32.932 [2024-05-13 02:49:23.519467] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:08:32.932 #75 NEW cov: 10910 ft: 16131 corp: 9/257b lim: 32 exec/s: 75 rss: 71Mb L: 32/32 MS: 1 ChangeByte- 00:08:32.932 [2024-05-13 
02:49:23.684831] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to memory map DMA region [0xf400, 0xf400) fd=325 offset=0 prot=0x3: Invalid argument 00:08:32.932 [2024-05-13 02:49:23.684853] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0xf400, 0xf400) offset=0 flags=0x3: Invalid argument 00:08:32.932 [2024-05-13 02:49:23.684863] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: Invalid argument 00:08:32.932 [2024-05-13 02:49:23.684896] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:08:33.189 #76 NEW cov: 10917 ft: 16288 corp: 10/289b lim: 32 exec/s: 76 rss: 71Mb L: 32/32 MS: 1 ChangeByte- 00:08:33.189 [2024-05-13 02:49:23.850684] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to memory map DMA region [0x400000000, 0x400000000) fd=325 offset=0 prot=0x3: Invalid argument 00:08:33.189 [2024-05-13 02:49:23.850708] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0x400000000, 0x400000000) offset=0 flags=0x3: Invalid argument 00:08:33.189 [2024-05-13 02:49:23.850718] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: Invalid argument 00:08:33.189 [2024-05-13 02:49:23.850735] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:08:33.189 #77 NEW cov: 10917 ft: 16431 corp: 11/321b lim: 32 exec/s: 77 rss: 71Mb L: 32/32 MS: 1 ChangeBit- 00:08:33.446 [2024-05-13 02:49:24.016580] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to memory map DMA region [0x20000000000000, 0x20000000000000) fd=325 offset=0 prot=0x3: Invalid argument 00:08:33.446 [2024-05-13 02:49:24.016604] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0x20000000000000, 0x20000000000000) offset=0 flags=0x3: Invalid argument 00:08:33.446 [2024-05-13 02:49:24.016614] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: Invalid argument 00:08:33.446 [2024-05-13 02:49:24.016630] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:08:33.446 #78 NEW cov: 10917 ft: 16651 corp: 12/353b lim: 32 exec/s: 39 rss: 71Mb L: 32/32 MS: 1 ChangeBinInt- 00:08:33.446 #78 DONE cov: 10917 ft: 16651 corp: 12/353b lim: 32 exec/s: 39 rss: 71Mb 00:08:33.446 Done 78 runs in 2 second(s) 00:08:33.446 [2024-05-13 02:49:24.138563] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: disabling controller 00:08:33.446 [2024-05-13 02:49:24.188101] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:33.704 02:49:24 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-3 /var/tmp/suppress_vfio_fuzz 00:08:33.704 02:49:24 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:33.704 02:49:24 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:33.704 02:49:24 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:08:33.704 02:49:24 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=4 00:08:33.704 02:49:24 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:33.704 02:49:24 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:33.704 02:49:24 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local 
corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:08:33.704 02:49:24 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-4 00:08:33.704 02:49:24 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-4/domain/1 00:08:33.704 02:49:24 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-4/domain/2 00:08:33.704 02:49:24 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-4/fuzz_vfio_json.conf 00:08:33.704 02:49:24 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:33.704 02:49:24 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:33.704 02:49:24 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-4 /tmp/vfio-user-4/domain/1 /tmp/vfio-user-4/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:08:33.704 02:49:24 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-4/domain/1%; 00:08:33.704 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-4/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:33.704 02:49:24 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:33.704 02:49:24 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:33.704 02:49:24 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-4/domain/1 -c /tmp/vfio-user-4/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 -Y /tmp/vfio-user-4/domain/2 -r /tmp/vfio-user-4/spdk4.sock -Z 4 00:08:33.705 [2024-05-13 02:49:24.419811] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:08:33.705 [2024-05-13 02:49:24.419884] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3510593 ] 00:08:33.705 EAL: No free 2048 kB hugepages reported on node 1 00:08:33.705 [2024-05-13 02:49:24.456974] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:33.705 [2024-05-13 02:49:24.494866] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.962 [2024-05-13 02:49:24.534123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.962 [2024-05-13 02:49:24.699404] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:33.962 INFO: Running with entropic power schedule (0xFF, 100). 
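Note: the vfio/run.sh trace above (steps @25 through @47) performs the same per-fuzzer-type preparation before every run: create the per-instance directories, rewrite the vfio-user JSON template so its socket paths point at this instance, and launch llvm_vfio_fuzz with those directories wired in through flags. The sketch below reconstructs that sequence from the trace; the prepare_and_run wrapper itself and the redirection of the sed output into the per-instance config are assumptions, while the paths and flag values are taken from the traced commands.

#!/bin/bash
# Sketch of the per-fuzzer-type setup seen in the vfio/run.sh trace above.
# prepare_and_run is illustrative; the argument values mirror "start_llvm_fuzz 4 1 0x1".
prepare_and_run() {
    local fuzzer_type=$1   # selects the fuzz target via -Z (4 in the trace)
    local timen=$2         # time value forwarded to -t (1 in the trace)
    local core=$3          # core mask forwarded to -m (0x1 in the trace)
    local spdk=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    local corpus_dir=$spdk/../corpus/llvm_vfio_$fuzzer_type
    local fuzzer_dir=/tmp/vfio-user-$fuzzer_type
    local vfiouser_dir=$fuzzer_dir/domain/1
    local vfiouser_io_dir=$fuzzer_dir/domain/2
    local vfiouser_cfg=$fuzzer_dir/fuzz_vfio_json.conf
    local suppress_file=/var/tmp/suppress_vfio_fuzz   # assumed already written (run.sh@43-@44)

    mkdir -p "$fuzzer_dir" "$vfiouser_dir" "$vfiouser_io_dir" "$corpus_dir"

    # Point the template config at this instance's sockets; writing the result
    # to $vfiouser_cfg is an assumption -- the trace only shows the sed command.
    sed -e "s%/tmp/vfio-user/domain/1%$vfiouser_dir%" \
        -e "s%/tmp/vfio-user/domain/2%$vfiouser_io_dir%" \
        "$spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf" > "$vfiouser_cfg"

    # Same flags as the traced invocation at run.sh@47; placing LSAN_OPTIONS in the
    # command environment is an assumption (the trace declares it as a local variable).
    LSAN_OPTIONS=report_objects=1:suppressions=$suppress_file:print_suppressions=0 \
    "$spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz" \
        -m "$core" -s 0 -P "$spdk/../output/llvm/" \
        -F "$vfiouser_dir" -c "$vfiouser_cfg" -t "$timen" \
        -D "$corpus_dir" -Y "$vfiouser_io_dir" \
        -r "$fuzzer_dir/spdk$fuzzer_type.sock" -Z "$fuzzer_type"
}

prepare_and_run 4 1 0x1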
00:08:33.962 INFO: Seed: 3329739678 00:08:33.962 INFO: Loaded 1 modules (347665 inline 8-bit counters): 347665 [0x2750f8c, 0x27a5d9d), 00:08:33.962 INFO: Loaded 1 PC tables (347665 PCs): 347665 [0x27a5da0,0x2cf3eb0), 00:08:33.962 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:08:33.962 INFO: A corpus is not provided, starting from an empty corpus 00:08:33.962 #2 INITED exec/s: 0 rss: 64Mb 00:08:33.962 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:33.962 This may also happen if the target rejected all inputs we tried so far 00:08:34.220 [2024-05-13 02:49:24.779599] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: enabling controller 00:08:34.477 NEW_FUNC[1/645]: 0x4a5560 in fuzz_vfio_user_dma_unmap /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:144 00:08:34.477 NEW_FUNC[2/645]: 0x4a9190 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:34.477 #63 NEW cov: 10876 ft: 10791 corp: 2/33b lim: 32 exec/s: 0 rss: 70Mb L: 32/32 MS: 1 InsertRepeatedBytes- 00:08:34.734 #69 NEW cov: 10890 ft: 14433 corp: 3/65b lim: 32 exec/s: 0 rss: 70Mb L: 32/32 MS: 1 CopyPart- 00:08:34.993 NEW_FUNC[1/1]: 0x19d3650 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:34.993 #70 NEW cov: 10907 ft: 15092 corp: 4/97b lim: 32 exec/s: 0 rss: 71Mb L: 32/32 MS: 1 ChangeByte- 00:08:34.993 #71 NEW cov: 10907 ft: 15580 corp: 5/129b lim: 32 exec/s: 71 rss: 71Mb L: 32/32 MS: 1 ShuffleBytes- 00:08:35.251 #72 NEW cov: 10907 ft: 16113 corp: 6/161b lim: 32 exec/s: 72 rss: 71Mb L: 32/32 MS: 1 CrossOver- 00:08:35.509 #73 NEW cov: 10907 ft: 16310 corp: 7/193b lim: 32 exec/s: 73 rss: 71Mb L: 32/32 MS: 1 ChangeByte- 00:08:35.767 #74 NEW cov: 10907 ft: 16445 corp: 8/225b lim: 32 exec/s: 74 rss: 71Mb L: 32/32 MS: 1 ChangeBinInt- 00:08:35.767 #85 NEW cov: 10907 ft: 16646 corp: 9/257b lim: 32 exec/s: 85 rss: 71Mb L: 32/32 MS: 1 ChangeByte- 00:08:36.025 #86 NEW cov: 10914 ft: 16663 corp: 10/289b lim: 32 exec/s: 86 rss: 71Mb L: 32/32 MS: 1 ChangeByte- 00:08:36.283 #92 NEW cov: 10914 ft: 17193 corp: 11/321b lim: 32 exec/s: 46 rss: 71Mb L: 32/32 MS: 1 ChangeBinInt- 00:08:36.283 #92 DONE cov: 10914 ft: 17193 corp: 11/321b lim: 32 exec/s: 46 rss: 71Mb 00:08:36.283 Done 92 runs in 2 second(s) 00:08:36.283 [2024-05-13 02:49:26.925580] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: disabling controller 00:08:36.284 [2024-05-13 02:49:26.975204] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:36.542 02:49:27 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-4 /var/tmp/suppress_vfio_fuzz 00:08:36.542 02:49:27 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:36.542 02:49:27 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:36.542 02:49:27 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:08:36.542 02:49:27 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=5 00:08:36.542 02:49:27 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:36.542 02:49:27 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:36.542 02:49:27 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local 
corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:08:36.542 02:49:27 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-5 00:08:36.542 02:49:27 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-5/domain/1 00:08:36.542 02:49:27 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-5/domain/2 00:08:36.542 02:49:27 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-5/fuzz_vfio_json.conf 00:08:36.542 02:49:27 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:36.542 02:49:27 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:36.542 02:49:27 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-5 /tmp/vfio-user-5/domain/1 /tmp/vfio-user-5/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:08:36.542 02:49:27 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-5/domain/1%; 00:08:36.542 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-5/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:36.542 02:49:27 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:36.542 02:49:27 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:36.542 02:49:27 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-5/domain/1 -c /tmp/vfio-user-5/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 -Y /tmp/vfio-user-5/domain/2 -r /tmp/vfio-user-5/spdk5.sock -Z 5 00:08:36.542 [2024-05-13 02:49:27.207889] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:08:36.542 [2024-05-13 02:49:27.207972] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3511114 ] 00:08:36.542 EAL: No free 2048 kB hugepages reported on node 1 00:08:36.542 [2024-05-13 02:49:27.243509] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:36.542 [2024-05-13 02:49:27.279890] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.542 [2024-05-13 02:49:27.317890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.801 [2024-05-13 02:49:27.480898] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:36.801 INFO: Running with entropic power schedule (0xFF, 100). 
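Note: each setup also writes a LeakSanitizer suppression file before launching the fuzzer. The echo steps at vfio/run.sh@43 and @44 in the trace add one leak:<symbol> line per known allocation site, and LSAN_OPTIONS (run.sh@34) points the sanitizer at that file so those two sites are not reported as leaks when the fuzzer exits. A minimal reconstruction follows; whether the first echo truncates or appends, and whether the variable is exported, are assumptions, while the symbol names and the option string come from the trace.

# Reconstruction of the LeakSanitizer suppression setup from vfio/run.sh@34/@43/@44.
suppress_file=/var/tmp/suppress_vfio_fuzz

# One "leak:<symbol>" entry per allocation site to ignore; truncating on the
# first write is an assumption -- the trace only shows the two echo commands.
echo "leak:spdk_nvmf_qpair_disconnect" >  "$suppress_file"
echo "leak:nvmf_ctrlr_create"          >> "$suppress_file"

# report_objects=1 lists the leaked objects, suppressions= names the file above,
# print_suppressions=0 keeps matched suppressions out of the output.
# Exporting is an assumption; the trace declares this as a local shell variable.
export LSAN_OPTIONS=report_objects=1:suppressions=$suppress_file:print_suppressions=0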
00:08:36.801 INFO: Seed: 1813774323 00:08:36.801 INFO: Loaded 1 modules (347665 inline 8-bit counters): 347665 [0x2750f8c, 0x27a5d9d), 00:08:36.801 INFO: Loaded 1 PC tables (347665 PCs): 347665 [0x27a5da0,0x2cf3eb0), 00:08:36.801 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:08:36.801 INFO: A corpus is not provided, starting from an empty corpus 00:08:36.801 #2 INITED exec/s: 0 rss: 64Mb 00:08:36.801 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:36.801 This may also happen if the target rejected all inputs we tried so far 00:08:36.801 [2024-05-13 02:49:27.549213] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: enabling controller 00:08:37.059 [2024-05-13 02:49:27.609442] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:37.059 [2024-05-13 02:49:27.609476] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:37.318 NEW_FUNC[1/646]: 0x4a5f60 in fuzz_vfio_user_irq_set /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:171 00:08:37.318 NEW_FUNC[2/646]: 0x4a9190 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:37.318 #54 NEW cov: 10889 ft: 10846 corp: 2/14b lim: 13 exec/s: 0 rss: 70Mb L: 13/13 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:08:37.318 [2024-05-13 02:49:28.061019] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:37.318 [2024-05-13 02:49:28.061062] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:37.576 #55 NEW cov: 10903 ft: 13720 corp: 3/27b lim: 13 exec/s: 0 rss: 70Mb L: 13/13 MS: 1 ChangeBit- 00:08:37.576 [2024-05-13 02:49:28.234132] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:37.576 [2024-05-13 02:49:28.234163] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:37.576 NEW_FUNC[1/1]: 0x19d3650 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:37.576 #56 NEW cov: 10920 ft: 14713 corp: 4/40b lim: 13 exec/s: 0 rss: 71Mb L: 13/13 MS: 1 CrossOver- 00:08:37.834 [2024-05-13 02:49:28.406269] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:37.834 [2024-05-13 02:49:28.406299] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:37.834 #57 NEW cov: 10920 ft: 15024 corp: 5/53b lim: 13 exec/s: 0 rss: 71Mb L: 13/13 MS: 1 ChangeBinInt- 00:08:37.834 [2024-05-13 02:49:28.575487] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:37.834 [2024-05-13 02:49:28.575518] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:38.092 #63 NEW cov: 10920 ft: 15529 corp: 6/66b lim: 13 exec/s: 63 rss: 71Mb L: 13/13 MS: 1 ChangeBinInt- 00:08:38.092 [2024-05-13 02:49:28.746837] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:38.092 [2024-05-13 02:49:28.746868] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:38.092 #64 NEW cov: 10920 ft: 15686 corp: 7/79b lim: 13 exec/s: 64 rss: 71Mb L: 13/13 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\001"- 00:08:38.350 [2024-05-13 02:49:28.916988] vfio_user.c:3106:vfio_user_log: *ERROR*: 
/tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:38.350 [2024-05-13 02:49:28.917020] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:38.350 #65 NEW cov: 10920 ft: 15725 corp: 8/92b lim: 13 exec/s: 65 rss: 71Mb L: 13/13 MS: 1 ChangeBinInt- 00:08:38.350 [2024-05-13 02:49:29.085962] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:38.350 [2024-05-13 02:49:29.085993] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:38.609 #66 NEW cov: 10920 ft: 16164 corp: 9/105b lim: 13 exec/s: 66 rss: 71Mb L: 13/13 MS: 1 CopyPart- 00:08:38.609 [2024-05-13 02:49:29.256325] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:38.609 [2024-05-13 02:49:29.256356] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:38.609 #72 NEW cov: 10927 ft: 16345 corp: 10/118b lim: 13 exec/s: 72 rss: 71Mb L: 13/13 MS: 1 ShuffleBytes- 00:08:38.867 [2024-05-13 02:49:29.427260] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:38.867 [2024-05-13 02:49:29.427291] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:38.867 #73 NEW cov: 10927 ft: 17404 corp: 11/131b lim: 13 exec/s: 36 rss: 72Mb L: 13/13 MS: 1 ChangeBinInt- 00:08:38.867 #73 DONE cov: 10927 ft: 17404 corp: 11/131b lim: 13 exec/s: 36 rss: 72Mb 00:08:38.867 ###### Recommended dictionary. ###### 00:08:38.867 "\001\000\000\000\000\000\000\001" # Uses: 0 00:08:38.867 ###### End of recommended dictionary. ###### 00:08:38.867 Done 73 runs in 2 second(s) 00:08:38.867 [2024-05-13 02:49:29.548562] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: disabling controller 00:08:38.867 [2024-05-13 02:49:29.597652] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:39.126 02:49:29 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-5 /var/tmp/suppress_vfio_fuzz 00:08:39.126 02:49:29 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:39.126 02:49:29 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:39.126 02:49:29 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:08:39.126 02:49:29 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=6 00:08:39.126 02:49:29 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:39.126 02:49:29 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:39.126 02:49:29 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:08:39.126 02:49:29 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-6 00:08:39.126 02:49:29 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-6/domain/1 00:08:39.126 02:49:29 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-6/domain/2 00:08:39.126 02:49:29 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-6/fuzz_vfio_json.conf 00:08:39.126 02:49:29 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:39.126 02:49:29 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:39.126 
02:49:29 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-6 /tmp/vfio-user-6/domain/1 /tmp/vfio-user-6/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:08:39.126 02:49:29 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-6/domain/1%; 00:08:39.126 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-6/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:39.126 02:49:29 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:39.126 02:49:29 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:39.126 02:49:29 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-6/domain/1 -c /tmp/vfio-user-6/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 -Y /tmp/vfio-user-6/domain/2 -r /tmp/vfio-user-6/spdk6.sock -Z 6 00:08:39.126 [2024-05-13 02:49:29.830605] Starting SPDK v24.05-pre git sha1 dafdb289f / DPDK 24.07.0-rc0 initialization... 00:08:39.126 [2024-05-13 02:49:29.830702] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3511648 ] 00:08:39.126 EAL: No free 2048 kB hugepages reported on node 1 00:08:39.126 [2024-05-13 02:49:29.866360] pci_dpdk.c: 38:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:39.126 [2024-05-13 02:49:29.903814] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.384 [2024-05-13 02:49:29.942116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.384 [2024-05-13 02:49:30.105508] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:39.384 INFO: Running with entropic power schedule (0xFF, 100). 00:08:39.384 INFO: Seed: 145812384 00:08:39.384 INFO: Loaded 1 modules (347665 inline 8-bit counters): 347665 [0x2750f8c, 0x27a5d9d), 00:08:39.384 INFO: Loaded 1 PC tables (347665 PCs): 347665 [0x27a5da0,0x2cf3eb0), 00:08:39.384 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:08:39.384 INFO: A corpus is not provided, starting from an empty corpus 00:08:39.384 #2 INITED exec/s: 0 rss: 63Mb 00:08:39.384 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
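Note: around each run, the ../common.sh@72 and @73 lines show the outer driver advancing a counter and calling start_llvm_fuzz with the same time and core-mask arguments for every fuzzer type. A sketch of that loop is below; the value of fuzz_num and the stub body are assumptions, since the trace only shows the increment, the comparison, and calls such as start_llvm_fuzz 5 1 0x1.

# Sketch of the driver loop implied by ../common.sh@72-@73 in the trace.
# The stub stands in for the real start_llvm_fuzz defined in vfio/run.sh.
start_llvm_fuzz() {
    echo "would run fuzzer type $1 with -t $2 on core mask $3"
}

fuzz_num=7   # hypothetical number of vfio fuzzer types
for (( i = 0; i < fuzz_num; i++ )); do
    start_llvm_fuzz "$i" 1 0x1   # matches the traced calls, e.g. "start_llvm_fuzz 5 1 0x1"
done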
00:08:39.384 This may also happen if the target rejected all inputs we tried so far 00:08:39.384 [2024-05-13 02:49:30.173119] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: enabling controller 00:08:39.642 [2024-05-13 02:49:30.245712] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:39.642 [2024-05-13 02:49:30.245743] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:39.901 NEW_FUNC[1/646]: 0x4a6c50 in fuzz_vfio_user_set_msix /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:190 00:08:39.901 NEW_FUNC[2/646]: 0x4a9190 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:39.901 #5 NEW cov: 10878 ft: 10575 corp: 2/10b lim: 9 exec/s: 0 rss: 70Mb L: 9/9 MS: 3 ShuffleBytes-ChangeBit-InsertRepeatedBytes- 00:08:40.159 [2024-05-13 02:49:30.748662] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:40.159 [2024-05-13 02:49:30.748706] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:40.159 #6 NEW cov: 10892 ft: 13556 corp: 3/19b lim: 9 exec/s: 0 rss: 70Mb L: 9/9 MS: 1 CrossOver- 00:08:40.159 [2024-05-13 02:49:30.932483] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:40.159 [2024-05-13 02:49:30.932519] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:40.418 NEW_FUNC[1/1]: 0x19d3650 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:40.418 #7 NEW cov: 10909 ft: 15241 corp: 4/28b lim: 9 exec/s: 0 rss: 71Mb L: 9/9 MS: 1 ChangeByte- 00:08:40.418 [2024-05-13 02:49:31.125228] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:40.418 [2024-05-13 02:49:31.125262] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:40.676 #23 NEW cov: 10909 ft: 15783 corp: 5/37b lim: 9 exec/s: 23 rss: 72Mb L: 9/9 MS: 1 ChangeByte- 00:08:40.676 [2024-05-13 02:49:31.310037] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:40.676 [2024-05-13 02:49:31.310071] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:40.676 #24 NEW cov: 10909 ft: 15978 corp: 6/46b lim: 9 exec/s: 24 rss: 72Mb L: 9/9 MS: 1 CopyPart- 00:08:40.934 [2024-05-13 02:49:31.489786] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:40.934 [2024-05-13 02:49:31.489815] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:40.934 #25 NEW cov: 10909 ft: 16006 corp: 7/55b lim: 9 exec/s: 25 rss: 72Mb L: 9/9 MS: 1 ShuffleBytes- 00:08:40.934 [2024-05-13 02:49:31.679609] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:40.934 [2024-05-13 02:49:31.679639] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:41.192 #31 NEW cov: 10912 ft: 16190 corp: 8/64b lim: 9 exec/s: 31 rss: 72Mb L: 9/9 MS: 1 ChangeBit- 00:08:41.192 [2024-05-13 02:49:31.863219] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:41.192 [2024-05-13 02:49:31.863249] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:41.192 #32 NEW cov: 10919 ft: 16364 corp: 9/73b lim: 9 
exec/s: 32 rss: 72Mb L: 9/9 MS: 1 ChangeByte- 00:08:41.451 [2024-05-13 02:49:32.049950] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:41.451 [2024-05-13 02:49:32.049981] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:41.451 #48 NEW cov: 10919 ft: 16741 corp: 10/82b lim: 9 exec/s: 24 rss: 72Mb L: 9/9 MS: 1 CopyPart- 00:08:41.451 #48 DONE cov: 10919 ft: 16741 corp: 10/82b lim: 9 exec/s: 24 rss: 72Mb 00:08:41.451 Done 48 runs in 2 second(s) 00:08:41.451 [2024-05-13 02:49:32.176578] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: disabling controller 00:08:41.451 [2024-05-13 02:49:32.221683] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:41.709 02:49:32 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-6 /var/tmp/suppress_vfio_fuzz 00:08:41.709 02:49:32 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:41.709 02:49:32 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:41.709 02:49:32 llvm_fuzz.vfio_fuzz -- vfio/run.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:08:41.709 00:08:41.709 real 0m19.146s 00:08:41.709 user 0m26.721s 00:08:41.709 sys 0m1.814s 00:08:41.709 02:49:32 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:41.709 02:49:32 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@10 -- # set +x 00:08:41.709 ************************************ 00:08:41.709 END TEST vfio_fuzz 00:08:41.709 ************************************ 00:08:41.709 02:49:32 llvm_fuzz -- fuzz/llvm.sh@67 -- # [[ 1 -eq 0 ]] 00:08:41.709 00:08:41.709 real 1m23.855s 00:08:41.709 user 2m6.149s 00:08:41.709 sys 0m10.624s 00:08:41.709 02:49:32 llvm_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:41.709 02:49:32 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:08:41.709 ************************************ 00:08:41.709 END TEST llvm_fuzz 00:08:41.709 ************************************ 00:08:41.709 02:49:32 -- spdk/autotest.sh@373 -- # [[ 0 -eq 1 ]] 00:08:41.709 02:49:32 -- spdk/autotest.sh@378 -- # trap - SIGINT SIGTERM EXIT 00:08:41.709 02:49:32 -- spdk/autotest.sh@380 -- # timing_enter post_cleanup 00:08:41.709 02:49:32 -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:41.709 02:49:32 -- common/autotest_common.sh@10 -- # set +x 00:08:41.709 02:49:32 -- spdk/autotest.sh@381 -- # autotest_cleanup 00:08:41.709 02:49:32 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:08:41.709 02:49:32 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:08:41.709 02:49:32 -- common/autotest_common.sh@10 -- # set +x 00:08:48.266 INFO: APP EXITING 00:08:48.266 INFO: killing all VMs 00:08:48.266 INFO: killing vhost app 00:08:48.266 INFO: EXIT DONE 00:08:50.799 Waiting for block devices as requested 00:08:50.799 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:08:50.799 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:08:50.799 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:08:50.799 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:08:50.799 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:08:50.799 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:08:51.057 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:08:51.057 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:08:51.057 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:08:51.057 0000:80:04.6 (8086 2021): vfio-pci -> 
ioatdma 00:08:51.315 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:08:51.315 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:08:51.315 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:08:51.574 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:08:51.574 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:08:51.574 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:08:51.833 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:08:55.136 Cleaning 00:08:55.137 Removing: /dev/shm/spdk_tgt_trace.pid3478339 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3475894 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3476954 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3478339 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3478796 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3479771 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3479904 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3481015 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3481025 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3481425 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3481741 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3481827 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3482147 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3482460 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3482648 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3482806 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3483107 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3483949 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3486853 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3487139 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3487387 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3487438 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3487948 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3488011 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3488327 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3488463 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3488735 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3488878 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3488969 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3489136 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3489555 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3489838 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3490123 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3490201 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3490492 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3490513 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3490711 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3490905 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3491158 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3491441 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3491723 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3492010 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3492289 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3492570 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3492813 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3493018 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3493209 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3493470 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3493749 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3494033 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3494321 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3494600 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3494882 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3495243 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3495487 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3495835 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3496333 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3496650 00:08:55.137 Removing: 
/var/run/dpdk/spdk_pid3496798 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3497441 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3497748 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3498279 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3498805 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3499113 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3499636 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3500167 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3500510 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3500991 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3501529 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3501872 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3502353 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3502882 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3503194 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3503705 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3504243 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3504563 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3505064 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3505605 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3505918 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3506426 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3506955 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3507257 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3507781 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3508210 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3508696 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3509211 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3509749 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3510210 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3510593 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3511114 00:08:55.137 Removing: /var/run/dpdk/spdk_pid3511648 00:08:55.137 Clean 00:08:55.396 02:49:45 -- common/autotest_common.sh@1447 -- # return 0 00:08:55.396 02:49:45 -- spdk/autotest.sh@382 -- # timing_exit post_cleanup 00:08:55.396 02:49:45 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:55.396 02:49:45 -- common/autotest_common.sh@10 -- # set +x 00:08:55.396 02:49:46 -- spdk/autotest.sh@384 -- # timing_exit autotest 00:08:55.396 02:49:46 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:55.396 02:49:46 -- common/autotest_common.sh@10 -- # set +x 00:08:55.396 02:49:46 -- spdk/autotest.sh@385 -- # chmod a+r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt 00:08:55.396 02:49:46 -- spdk/autotest.sh@387 -- # [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log ]] 00:08:55.396 02:49:46 -- spdk/autotest.sh@387 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log 00:08:55.396 02:49:46 -- spdk/autotest.sh@389 -- # hash lcov 00:08:55.396 02:49:46 -- spdk/autotest.sh@389 -- # [[ CC_TYPE=clang == *\c\l\a\n\g* ]] 00:08:55.396 02:49:46 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:08:55.396 02:49:46 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:08:55.396 02:49:46 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:55.396 02:49:46 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:55.396 02:49:46 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.396 02:49:46 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.396 02:49:46 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.396 02:49:46 -- paths/export.sh@5 -- $ export PATH 00:08:55.396 02:49:46 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.396 02:49:46 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:08:55.396 02:49:46 -- common/autobuild_common.sh@437 -- $ date +%s 00:08:55.396 02:49:46 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715561386.XXXXXX 00:08:55.396 02:49:46 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715561386.mh4rng 00:08:55.396 02:49:46 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:08:55.396 02:49:46 -- common/autobuild_common.sh@443 -- $ '[' -n main ']' 00:08:55.396 02:49:46 -- common/autobuild_common.sh@444 -- $ dirname /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build 00:08:55.397 02:49:46 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk' 00:08:55.397 02:49:46 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp' 00:08:55.397 02:49:46 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:08:55.397 02:49:46 -- common/autobuild_common.sh@453 -- $ get_config_params 00:08:55.397 02:49:46 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:08:55.397 02:49:46 -- common/autotest_common.sh@10 -- $ set +x 00:08:55.397 02:49:46 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build 
--with-vfio-user' 00:08:55.397 02:49:46 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:08:55.397 02:49:46 -- pm/common@17 -- $ local monitor 00:08:55.397 02:49:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:55.397 02:49:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:55.397 02:49:46 -- pm/common@21 -- $ date +%s 00:08:55.397 02:49:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:55.397 02:49:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:55.397 02:49:46 -- pm/common@21 -- $ date +%s 00:08:55.397 02:49:46 -- pm/common@25 -- $ sleep 1 00:08:55.397 02:49:46 -- pm/common@21 -- $ date +%s 00:08:55.397 02:49:46 -- pm/common@21 -- $ date +%s 00:08:55.397 02:49:46 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715561386 00:08:55.397 02:49:46 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715561386 00:08:55.397 02:49:46 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715561386 00:08:55.397 02:49:46 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715561386 00:08:55.656 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715561386_collect-vmstat.pm.log 00:08:55.656 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715561386_collect-cpu-load.pm.log 00:08:55.656 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715561386_collect-cpu-temp.pm.log 00:08:55.656 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715561386_collect-bmc-pm.bmc.pm.log 00:08:56.592 02:49:47 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:08:56.592 02:49:47 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112 00:08:56.592 02:49:47 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:56.592 02:49:47 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:08:56.592 02:49:47 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:08:56.592 02:49:47 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:08:56.592 02:49:47 -- spdk/autopackage.sh@19 -- $ timing_finish 00:08:56.592 02:49:47 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:08:56.592 02:49:47 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:08:56.592 02:49:47 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt 00:08:56.592 02:49:47 -- spdk/autopackage.sh@20 -- $ exit 0 00:08:56.592 02:49:47 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:08:56.592 02:49:47 -- pm/common@29 -- $ signal_monitor_resources TERM 00:08:56.592 02:49:47 -- 
pm/common@40 -- $ local monitor pid pids signal=TERM 00:08:56.592 02:49:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:56.592 02:49:47 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:08:56.592 02:49:47 -- pm/common@44 -- $ pid=3518424 00:08:56.592 02:49:47 -- pm/common@50 -- $ kill -TERM 3518424 00:08:56.592 02:49:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:56.592 02:49:47 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:08:56.592 02:49:47 -- pm/common@44 -- $ pid=3518427 00:08:56.592 02:49:47 -- pm/common@50 -- $ kill -TERM 3518427 00:08:56.592 02:49:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:56.592 02:49:47 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:08:56.592 02:49:47 -- pm/common@44 -- $ pid=3518431 00:08:56.592 02:49:47 -- pm/common@50 -- $ kill -TERM 3518431 00:08:56.592 02:49:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:56.592 02:49:47 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:08:56.592 02:49:47 -- pm/common@44 -- $ pid=3518471 00:08:56.592 02:49:47 -- pm/common@50 -- $ sudo -E kill -TERM 3518471 00:08:56.592 + [[ -n 3355727 ]] 00:08:56.592 + sudo kill 3355727 00:08:56.603 [Pipeline] } 00:08:56.622 [Pipeline] // stage 00:08:56.627 [Pipeline] } 00:08:56.644 [Pipeline] // timeout 00:08:56.650 [Pipeline] } 00:08:56.666 [Pipeline] // catchError 00:08:56.671 [Pipeline] } 00:08:56.691 [Pipeline] // wrap 00:08:56.696 [Pipeline] } 00:08:56.712 [Pipeline] // catchError 00:08:56.722 [Pipeline] stage 00:08:56.725 [Pipeline] { (Epilogue) 00:08:56.740 [Pipeline] catchError 00:08:56.741 [Pipeline] { 00:08:56.755 [Pipeline] echo 00:08:56.757 Cleanup processes 00:08:56.762 [Pipeline] sh 00:08:57.048 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:57.048 3431991 sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715561061 00:08:57.048 3432036 bash /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715561061 00:08:57.048 3518596 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/sdr.cache 00:08:57.048 3519396 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:57.116 [Pipeline] sh 00:08:57.409 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:57.409 ++ grep -v 'sudo pgrep' 00:08:57.409 ++ awk '{print $1}' 00:08:57.409 + sudo kill -9 3431991 3432036 3518596 00:08:57.420 [Pipeline] sh 00:08:57.702 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:08:57.702 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB 00:08:57.702 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB 00:08:59.091 [Pipeline] sh 00:08:59.375 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:08:59.375 Artifacts sizes are good 00:08:59.391 [Pipeline] archiveArtifacts 00:08:59.398 Archiving artifacts 00:08:59.450 [Pipeline] sh 
00:08:59.733 + sudo chown -R sys_sgci /var/jenkins/workspace/short-fuzz-phy-autotest 00:08:59.747 [Pipeline] cleanWs 00:08:59.758 [WS-CLEANUP] Deleting project workspace... 00:08:59.758 [WS-CLEANUP] Deferred wipeout is used... 00:08:59.765 [WS-CLEANUP] done 00:08:59.767 [Pipeline] } 00:08:59.787 [Pipeline] // catchError 00:08:59.800 [Pipeline] sh 00:09:00.080 + logger -p user.info -t JENKINS-CI 00:09:00.089 [Pipeline] } 00:09:00.106 [Pipeline] // stage 00:09:00.112 [Pipeline] } 00:09:00.159 [Pipeline] // node 00:09:00.163 [Pipeline] End of Pipeline 00:09:00.188 Finished: SUCCESS